Generating LLVM-IR in standalone process using C++ - clang

As part of my project work, I have been developing a language server that performs static analysis on C/C++ source code. The analysis operates on LLVM-IR generated from the source.
I have been using the command below to generate LLVM-IR for my C/C++ program; it is hard-coded and invoked via the system() call:
clang -O3 -emit-llvm hello.c -c -o hello.bc
Invoking an external command this way opens a security vulnerability, so it needs to be avoided. My current task is to use the C/C++ API provided by the LLVM infrastructure to convert a source file to LLVM-IR within the same process.
Being new to the LLVM codebase, I am finding it hard to locate the relevant API usage and example programs. I would appreciate any help on how to proceed.
Thanks in advance.
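One way to do this in-process is through clang's frontend library. The sketch below uses clang::CompilerInstance with EmitLLVMOnlyAction to build the IR module in memory; it is only an outline under the assumption that the clang/LLVM development headers and libraries are available, and the exact setup calls (diagnostics creation in particular) vary between LLVM releases:

```cpp
// Sketch: generate an LLVM-IR module for a C file in-process, without system().
// Assumes clang/LLVM development headers/libs; API details differ across releases.
#include <memory>
#include "clang/CodeGen/CodeGenAction.h"
#include "clang/Frontend/CompilerInstance.h"
#include "clang/Frontend/CompilerInvocation.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"

std::unique_ptr<llvm::Module> emitIR(const char *file, llvm::LLVMContext &ctx) {
  clang::CompilerInstance ci;
  ci.createDiagnostics();                     // signature varies by LLVM version

  // Equivalent of the old "clang -O3 -emit-llvm file -c" invocation.
  const char *args[] = {"-O3", "-emit-llvm", file};
  clang::CompilerInvocation::CreateFromArgs(ci.getInvocation(), args,
                                            ci.getDiagnostics());

  // EmitLLVMOnlyAction keeps the module in memory; EmitBCAction would
  // instead write a .bc file like the original command did.
  clang::EmitLLVMOnlyAction action(&ctx);
  if (!ci.ExecuteAction(action))
    return nullptr;
  return action.takeModule();                 // hand the module to the analysis
}
```

The returned llvm::Module can then be walked directly by the static-analysis passes instead of re-parsing a .bc file from disk.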

Related

Intermediate Files formed during compilation of DART code

I want to see the intermediate files formed during compilation of Dart code. What command should I run in cmd?
Example: for a hello.c file, I can run gcc -Wall --verbose --save-temps -o first-exec hello.c to see the intermediate files that are formed, such as hello.i, hello.s, and hello.o.
I want to see the similar assembly code files formed during Dart compilation. Is there any command through which I can see them?
I don't think the native Dart compiler creates any files. It's handled entirely in-memory. The web-compilers are different because they actually do create .js files.
The VM definitely does not create assembler files; its built-in assembler writes directly to memory. There is no separate assembler, like as or gas, that is run on an intermediate result.
Even the AOT native compilation does not have that kind of intermediate step.
You can choose to compile to a jit-snapshot or kernel file using, e.g., dart compile kernel. That shows you the intermediate Dart format.
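For example (assuming the dart CLI is on PATH; the input and output names are illustrative):

```shell
# Produce the intermediate kernel (.dill) file instead of a native binary.
dart compile kernel hello.dart -o hello.dill
```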

How to compile PyPy for OpenWrt?

I'm trying to compile PyPy for use on an OpenWrt configuration, but I am having a really hard time doing it.
My main problems are:
Each time I change the Makefile, I am forced to restart the translation process of PyPy. Is there a way to avoid this?
Would copying just the compiled pypy-c and lib*.so binaries do it, or would I have to copy everything from the compiled files of PyPy?
Here is the directory structure after running make on the files.
How do I specify the version of GCC to use? I've tried to do this without success.
How would I get the interpreter to run after installing it on OpenWrt?
One can also compile PyPy using PyPy itself, which gives a .tar file with a different structure (no Makefile, a pypy executable, etc.). Can I use that instead of building from source?
Here is my Makefile.
include $(TOPDIR)/rules.mk
PKG_NAME:=PyPy
PKG_VERSION:=5.8.0
PKG_RELEASE:=1
PKG_MD5SUM:=504c2d522595baf8775ae1045a217a2b120732537861d31b889d47c340b58bd5
PKG_SOURCE_NAME:=pypy2
PKG_SOURCE_URL:=https://bitbucket.org/pypy/pypy/downloads/
PKG_SOURCE:=$(PKG_SOURCE_NAME)-v$(PKG_VERSION)-src.tar.bz2
PKG_BUILD_DEPENDS:=python
PKG_CAT:=bzcat
PKG_BUILD_DIR:=$(BUILD_DIR)/$(BUILD_VARIANT)$(PKG_SOURCE_NAME)-v$(PKG_VERSION)-src
include $(INCLUDE_DIR)/package.mk
$(call include_mk, python-package.mk)
define Package/PyPy
SECTION:=utils
CATEGORY:=Utilities
TITLE:=PyPy
URL:=https://pypy.org/index.html
DEPENDS:=+libffi +libexpat +libunwind
endef
define Package/PyPy/description
PyPy is an alternate implementation of the Python programming language written in Python.
This distribution is for Linux architecture, using Python 2.
endef
define Package/PyPy/install
	$(INSTALL_DIR) $(1)/usr/bin $(1)/usr/lib
	$(INSTALL_BIN) $(PKG_BUILD_DIR)/pypy-c $(1)/usr/bin
	$(CP) $(PKG_BUILD_DIR)/*.so $(1)/usr/lib
	$(INSTALL_BIN) $(PKG_BUILD_DIR)/pypy/bin/* $(1)/usr/bin
endef
define Build/Compile
	$(MAKE) -C $(PKG_BUILD_DIR)
endef
$(eval $(call BuildPackage,PyPy))
I would need to see the entire process: the part of the Makefile above, the other pieces such as rules.mk and package.mk, the command-line arguments, and the environment variables, in order to understand what is not working.
The Makefile in question is not supported, the developers do not use it, and as you discovered it does not work well. As described in the build page, building PyPy from source is a four-part process, and the Makefile mashes three of those stages together, so any changes currently require restarting from scratch. Note that the underlying build process uses PYPY_USESSION_DIR, PYPY_USESSION_BASENAME, and a numbering system to ensure that each run of the first stage (translation from RPython to C) puts the results in a new, clean directory. For this reason, if you restart the process you will lose your previous work.
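For instance, the translation output location can be pinned with those environment variables before starting the build (the directory and name below are illustrative, not project defaults):

```shell
# Keep translation results in a predictable place across runs.
export PYPY_USESSION_DIR=/tmp/pypy-build
export PYPY_USESSION_BASENAME=openwrt
```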
Python is both a binary interpreter and a stdlib of "batteries included" modules. If you wish to use the binary interpreter, you need to install the binary and the stdlib support files together, since they work as a unit. That is why we specify a fourth step in the build page, packaging. Please read that section carefully for methods of using the interpreter + stdlib.
PyPy does not really support cross-compiling; one method that used to work is described in the RPython documentation. PyPy does support native gcc and/or clang. What compiler are you trying to use, on what hardware?
Run the interpreter as you would any Python interpreter: specify the path to the executable file, which (as stated above) needs to know how to find the support files, such as the stdlib and the site-packages directory of third-party modules installed specifically for PyPy.
Using PyPy to build PyPy should be no different than using CPython to build PyPy, except it will be much faster. The differences you see must be an artifact of how you are building.
I would recommend you NOT use the unsupported Makefile until you understand the build, compile, package, and install process; then, once you have a working installation, help the PyPy project improve the process until it can be automated into a Makefile.
If you are running OpenWrt on supported hardware and a supported operating system, you would be much happier using a binary distribution rather than compiling from source. In this case your Makefile should download a binary distribution and simply install it.
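Reusing the variables from the Makefile above, the binary-distribution route might look roughly like this (the archive name and URL are placeholders, not a real release path):

```makefile
PKG_SOURCE:=pypy-binary.tar.bz2          # placeholder archive name
PKG_SOURCE_URL:=https://example.org/     # placeholder download location

define Package/PyPy/install
	$(INSTALL_DIR) $(1)/usr/bin $(1)/usr/lib
	$(INSTALL_BIN) $(PKG_BUILD_DIR)/bin/pypy $(1)/usr/bin
	$(CP) $(PKG_BUILD_DIR)/lib*.so $(1)/usr/lib
endef
```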
If you are using a different CPU, chances are PyPy will not work out-of-the-box on your hardware, and you will have to run tests to make sure the JIT actually makes things faster.

Install Z3 in Windows

I downloaded Z3 4.3.0 for Windows (64 bits) from http://z3.codeplex.com/releases.
When I try to run z3.exe, which is in the bin folder, a prompt appears and disappears immediately. I need to know how to run a file written for Z3 through z3.exe.
How can I do this? Or what is the best option for running Z3 through Java?
z3.exe is a command-line tool. To execute an SMT-LIB 2.0 file called file.smt2, run the following command in the Command Prompt:
z3 file.smt2
If the directory containing z3.exe is not in your PATH environment variable, you will have to include the directory in the command above.
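As a minimal illustration, a file.smt2 along these lines asks Z3 whether an integer constraint is satisfiable:

```smt2
(declare-const x Int)
(assert (> x 10))
(assert (< x 20))
(check-sat)   ; prints sat
(get-model)   ; prints a satisfying value for x
```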
BTW, Z3 has no graphical user interface or environment. It is essentially a library for automated reasoning. z3.exe is a simple executable built using this library that allows us to execute commands stored in a file.
You can also play with Z3 using the web interface available at rise4fun.
At rise4fun, we have an SMT-LIB front-end and a Python-based one.
Both of them have interactive tutorials.
Here are some useful resources to learn about SMT:
Z3 tutorial
Tutorial on SMT-LIB
Article describing SMT applications
SMT-LIB benchmarks
Stack Overflow: you can search Z3-related questions by including [z3] in the search box.
Z3 has APIs for several programming languages: C, C++, .NET, Python, and OCaml.
In the next release, we will also provide support for Java.
You can already play with the Java API by using one of the nightly builds.
Go here for more information about Z3 nightly builds.
The nightly builds contain a Java example application using the Z3 API.

Can the `erl -make` command be extended to do things other than compiling .erl files?

I wanted to integrate Elixir into our project, and the good old code doesn't use rebar, so I think writing the rules for building .ex files into the Emakefile may be a good idea, yet the man page here doesn't mention anything relevant.
Edit:
Our team works mainly in a Windows environment, but deployment will be done on Linux servers, so I need a cross-platform solution. Since Erlang itself is cross-platform, I wanted to do it with the erl -make command.
Of course I could write a Makefile, but then I would need a build.bat or something similar to build the code in our development environments, since we don't have the make command on our dev machines.
Anyone have a better idea?
Update:
In case anyone wants to know, I'm doing it this way:
Copy the lib/elixir directory in the Elixir source tree to our source dir, say some_project/src/tools/elixir.
Add some_project/src/tools/elixir/src/elixir_transform.erl and some_project/src/tools/elixir/src/* to the Emakefile, in that order. Set the output dir to some_project/ebin (All the other .beam files are located there).
Copy src/elixir.app.src in the Elixir source tree to some_project/ebin/elixir.app, and edit it to fix the version code.
Build the Erlang code by running erl -pa ebin -make, in some_project dir.
Build the Elixir compiler by running erl -pa ebin -s elixir_compiler core -s erlang halt.
Now we have a working Elixir environment in our code, and I use the following escript to build our custom .ex files:
#!/usr/bin/env escript
%%! -pa ./ebin

main(_) ->
    ExList = [
        <<"source_1.ex">>,
        <<"source_2.ex">>,
        <<"source_3.ex">>],
    application:start(elixir),
    gen_server:call(elixir_code_server,
                    {compiler_options, [{docs, true}, {debug_info, true}]}),
    [elixir_compiler:file_to_path(F, <<"./ebin">>) || F <- ExList],
    erlang:halt(0).
If you want to explicitly compile Elixir, I would go with the Makefile approach since it will always be supported by Elixir. However, I would recommend the precompiled binaries or even assume Elixir is installed in each developer machine. You can even add a task to your Emakefile to guarantee everyone is using the proper Elixir version.
Finally, about compiling your own Elixir code, I would recommend simply using mix. mix is a binary that ships with Elixir and you can simply do ./src/tools/elixir/bin/mix compile from your Emakefile.
In case using mix is not possible, you should use the parallel compiler, since it will compile files using all the cores available on your machine and will automatically detect and resolve dependencies between files. Here is an example of calling the parallel compiler from Erlang:
https://github.com/basho/rebar/pull/347/files#L1R62
The API is very simple: it expects a list of file names to compile, as binaries, and the output directory, as another binary.

Bullseye for EDK Based Application

Is there any way to integrate an EDK2-based application with the BullseyeCoverage tool? The executable has a .efi extension and runs only in the EFI shell. I believe I also need to develop a driver to capture the coverage data onto an external disk for analysis, since the application is valid only in the EFI shell (BIOS).
So, does anyone have any idea how this can be done for EFI applications in general?
Thanks,
Pramod
These are the steps I took to integrate Bullseye into EDK2 (after installing Bullseye following the instruction on the website):
Take /run/libcov-printf.c and replace "printf" with "AsciiPrint".
Copy all the files from /run/ to your EDK $WORKSPACE.
Create a library .inf file and add libcov-printf.c to its [Sources] section.
Add the library instance to your platform .dsc file (probably MdeModulePkg.dsc).
Add a library reference to your application .inf file under [LibraryClasses].
Add a call to cov_dumpData() in your application source code (usually at its exit point).
Compile using the EDK2 build command.
You'll end up with an .efi binary that you can run on your target to get the measurements once the application reaches its exit point.
The output data can be redirected to a file (using the > operator inside the EDK2 shell) and then processed by covpost to merge it into the .cov file created during the build.
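The capture-and-merge step might look like this in practice (file names and paths are placeholders; COVFILE is Bullseye's environment variable for selecting the coverage file):

```shell
# In the EFI shell: run the instrumented application and capture its dump.
MyApp.efi > coverage.txt

# On the host: merge the captured run data into the build-time .cov file.
export COVFILE=/path/to/project.cov
covpost coverage.txt
```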
Well, I don't know how EFI works, but BullseyeCoverage supports customization. When I worked on an embedded system, I created open/read/write functions to write the coverage data not to a file but over a serial cable, and I created a host application to interpret the data sent over the serial cable and save it into a .cov file. I'm not sure whether this works for your case, but you can start from the following link.
http://www.bullseye.com/help/env-embedded.html