How do I use external crates in Rust?

I'm trying to work with the rust-http library, and I'd like to use it as the basis for a small project.
I have no idea how to use something that I can't install via rustpkg install <remote_url>. In fact, I found out today that rustpkg is now deprecated.
If I git clone the library and run the appropriate make commands to get it built, how do I use it elsewhere? I.e. how do I actually use extern crate http?

Since Rust 1.0, the vast majority of users manage a project's dependencies with Cargo. The TL;DR of the documentation is:
Create a project using cargo new
Edit the generated Cargo.toml file to add dependencies:
[dependencies]
old-http = "0.1.0-pre"
Access the crate in your code:
Rust 2021 and 2018
use old_http::SomeType;
Rust 2015
extern crate old_http;
use old_http::SomeType;
Build the project with cargo build
Cargo will take care of managing the versions, building the dependencies when needed, and passing the correct arguments to the compiler to link together all of the dependencies.
Read The Rust Programming Language for further details on getting started with Cargo. Specifying Dependencies in the Cargo book has details about what kinds of dependencies you can add.
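Putting those steps together, a minimal project looks roughly like this (old-http and SomeType are just the placeholder names from the snippet above; substitute the crate and items you actually depend on):
# Cargo.toml
[package]
name = "my_project"
version = "0.1.0"
edition = "2021"

[dependencies]
old-http = "0.1.0-pre"

// src/main.rs
use old_http::SomeType;   // placeholder item from the example above

fn main() {
    // cargo build / cargo run will fetch old-http, compile it, and link it in
    let _value: Option<SomeType> = None; // replace with real usage of the crate
}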

Update
For modern Rust, see the Cargo-based answer above.
Original answer
You need to pass the -L flag to rustc to add the directory which contains the compiled http library to the search path. Something like rustc -L path-to-cloned-rust-http-repo/build your-source-file.rs should do.
Tutorial reference

This is not related to the body of your post, but it is related to your title. It also assumes a Cargo-based project.
Best practice:
external crate named foo
use ::foo;
module (which is part of your code/crate) named foo
use crate::foo;
In both cases you can write a plain use foo; instead, but that can lead to confusion, as the sketch below shows.
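For example, if your crate defines its own module named foo and also depends on an external crate named foo, the two prefixes keep them apart (a minimal sketch; ExternalType is a made-up item name):
// src/main.rs (Rust 2018+)
mod foo {
    pub fn local_helper() {}
}

use ::foo::ExternalType;      // ::foo always refers to the external crate
use crate::foo::local_helper; // crate::foo always refers to your own module

fn main() {
    local_helper();
    let _x: Option<ExternalType> = None; // placeholder usage of the external item
}
A bare use foo::ExternalType; would be rejected here as ambiguous, which is exactly the confusion the two prefixes avoid.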

Once you've built it, you can use the normal extern crate http; in your code. The only trick is that you need to pass the appropriate -L flag to rustc to tell it where to find libhttp.
If you have a submodule in your project in the rust-http directory, and if it builds into its root (I don't actually know where make in rust-http deposits the resulting library), then you can build your own project with rustc -L rust-http pkg.rs. With that -L flag, the extern crate http; line in your pkg.rs will be able to find libhttp in the rust-http subfolder.
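As a minimal sketch, pkg.rs would just declare the crate; what you actually call from rust-http depends on its API, so the body below is only a placeholder:
// pkg.rs
extern crate http;

fn main() {
    // call into the http crate here, e.g. build a client or a request
}
Compiled with rustc -L rust-http pkg.rs as above, the extern crate http; declaration resolves against the libhttp that the submodule built.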

I ran into a similar issue. I ended up doing this in my Cargo.toml
[dependencies]
shell = { git = "https://github.com/google/rust-shell" }
Then in my main.rs I was able to add the following and compile successfully. Note that this particular crate exports macros, which is why I need #[macro_use]; for most crates you will not need #[macro_use] before the extern crate line.
#[macro_use] extern crate shell;
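For completeness, usage then looked roughly like the sketch below. Treat the run! macro as an assumption on my part: the exact macro names and signatures come from the crate's README, not from this answer.
#[macro_use] extern crate shell;

fn main() {
    // Hypothetical invocation; check the crate's README for the actual macro
    // name and what it returns before copying this.
    run!("echo hello").unwrap();
}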

Related

avoid using sudo to use z3++.h as a lib

I am using the Z3 prover for the first time. After reading most of the related answers, I gathered that I need to run sudo make install. How can I skip linking z3 into /usr/bin and /usr/lib and still use z3++.h in my own C++ project? (Not everyone has sudo rights, and I would like my code to build without them.)
You do need to compile the z3 source code if you want to be able to use it in your C/C++ projects. Compiling it will give you the library to link against. If you just download the source code, you can find the headers but you cannot link and hence cannot create your own executables.
But doing so does not require sudo access at all. The proper way is explained on the https://github.com/Z3Prover/z3 page, right in the README. Roughly, the steps go like this:
python scripts/mk_make.py --prefix=/home/leo
cd build
make
make install
Note that the --prefix parameter on the first line tells z3 where to install everything. Change that path to a place where you have write access; this way you do not need sudo access.
In order to compile your project successfully, you need to tell your compiler where to look for dynamic libraries and header files. Ask separately if you run into issues.
If you use GCC as your compiler, point it at the installed headers and library like this:
g++ your_file.cpp -Iz3_path/include -Lz3_path/lib -lz3
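For a quick end-to-end check that both the headers and the library are picked up, a standard z3 C++ API hello-world like the one below should build and print sat (z3_path stands for whatever --prefix you passed to mk_make.py):
// test_z3.cpp
#include <z3++.h>
#include <iostream>

int main() {
    z3::context c;
    z3::expr x = c.int_const("x");
    z3::solver s(c);
    s.add(x > 5);
    std::cout << s.check() << std::endl; // expected output: sat
    return 0;
}
Because the shared library sits in a non-standard location, you also need to point the runtime linker at it when running the program, e.g. LD_LIBRARY_PATH=z3_path/lib ./a.out (or link with -Wl,-rpath,z3_path/lib).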

Upgrading to Angular Material 5

I am using beta 8 of Angular Material in my project. It hasn't been updated since then. Now that v5 is available, we'd like to do so. What things should I consider while doing so? Any information or pointer to it would be really helpful.
Thanks!
What you have to do is the following:
Since MaterialModule has been removed, you have to import the modules your application requires separately: either into another module (such as a MyMaterialModule, making sure it is imported after BrowserModule), into a const, or by adding the modules individually after BrowserModule. For more info, visit the docs.
The md selectors have been removed and replaced with mat (the corresponding classes as well). To update, install angular-material-prefix-updater from npm:
npm i -g angular-material-prefix-updater
After npm has installed it globally, run this command on your app's root directory (where the -p parameter is the app's tsconfig.json):
mat-switcher -p src/app/tsconfig.json # Or wherever tsconfig.json is
For more info, visit the source code or use the --help parameter.
Note: You no longer need to specify MATERIAL_COMPATIBILITY_MODE.

How to install waf?

I have cloned and built the waf script using:
./waf-light configure
Then to build my project (provided by Gomspace) I need to add waf and eclipse.py to my path. So far I haven't found anything better than this setenv script:
WAFROOT=~/git/waf/
export PYTHONPATH=$WAFROOT/waflib/extras/:$PYTHONPATH
export PATH=~/git/waf/:$PATH
Called with:
source setenv
This is a pretty ugly solution. Is there a more elegant way to install waf?
You don't install waf. The command you found correctly builds waf: ./waf-light configure build. Then, for each project you create, you put the built waf script into that project's root directory. I can't find a reference, but this is the way in which waf's primary author, Thomas Nagy, wants the tool to be used. Projects that repackage waf to make the tool installable aren't "officially sanctioned."
There are advantages and disadvantages with non-installation:
Disadvantages:
You have to add the semi-binary, roughly 100 kB waf file to your repository.
Because the file contains binary code, people can have legal objections to distributing it.
Advantages:
It doesn't matter if new versions of waf break the old API.
Users don't need to install waf before compiling the project -- having Python on the system is enough.
Fedora (at least Fedora 22) has a yum package for waf, so a system install of waf is possible, albeit with a hack.
After you run something like python3 ./waf-light configure build, you'll get a file called waf that's actually a Python script with some binary data at the end. If you put it into /usr/bin and run it as non-root, you'll get an error because it fails to create a directory in /usr/bin. If you run it as root, you'll get the new directory and /usr/bin/waf runs normally.
Here's the trick that I learned from examining the find_lib() function in the waf Python script.
Copy the waf to /usr/bin/waf
As root, run /usr/bin/waf. Notice that it creates a directory. You'll see something like /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18
mv that directory to /usr/lib, dropping the . in the directory name, e.g. mv /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18 /usr/lib/waf-2.0.19-b2f63c807a4215294bf6005410c74c18
If you want to use waf with Python3, repeat Steps 2-3 running the Python script /usr/bin/waf under Python3. Under Python3, the directory names will start with .waf3-/waf3- instead of .waf-/waf-.
(Optional) Remove the binary data at the end of /usr/bin/waf.
Now, non-root should be able to just use /usr/bin/waf.
That said, and as another answer also points out, I believe waf's author intended waf to be embedded in projects, so that each project can use its own version of waf without fear that a project will fail to build when newer versions of waf come out. Thus, the one-global-version use case does not seem to be officially supported.

How do I build OpenCV's single module (e.g. legacy) for my development in Ubuntu?

I modified some source code in OpenCV's legacy module. My project is built on this modified legacy module, and I want to debug it. The only way I can think of is to replace the original legacy module with my revised source, then compile and reinstall the whole of OpenCV. That seems like a waste of time, since the other modules (e.g. imgproc, highgui, etc.) haven't changed and don't need to be reinstalled. There must be an easier way.
My development environment is Vim, GDB, and OpenCV 2.4.13. I am new to OpenCV on Linux. How do I build a single OpenCV module in Ubuntu? And if I want to keep both the original legacy module and the modified one for other uses, how should I do that?
There is a way to build a single module without rebuilding the entire source from scratch. I assume you followed the usual procedure to install OpenCV on your machine ("OpenCV Installation in Linux").
After modifying OpenCV's legacy module, you can build just that module with the following commands:
# Go to the build directory. If you followed the tutorial I shared then it is the release folder
cd release
# And build only the module you want (for example, the legacy module in your case)
make opencv_legacy
sudo make install

How do I install a project built with bazel?

I am working on a project that is transitioning from CMake to Bazel. One critical feature that we are apparently losing in the bargain is the ability to install the project, so that it can be used by other (not necessarily Bazel) projects.
AFAICT, there is currently no built-in support for installing a project?!
I need to create a portable (must work on at least Linux and MacOS) way to install the project. Specifically:
I need to be able to specify libraries, headers, executables, and other files (e.g. LICENSE) that need to be installed.
The user needs to be able to specify an absolute prefix where things should be installed.
I really, really should be able to execute the "install" step more than once, giving different prefixes each time, without Bazel getting confused (i.e. it must not try to "remember" what files it already installed, or if it does, must understand when the prefix is different from last time).
Libraries should be installed to the right place (e.g. lib64), or at least it should be possible for the user to specify the correct libdir.
The install step MUST NOT touch the time stamp on any file from a previous install that has not changed. (Ideally, Bazel itself would handle this; using the install command is tricky and has potential portability issues. Note platform requirements, above.)
What is the best way to go about doing this?
Unless you want to build a specific package format (e.g. deb or rpm), you probably want to create an executable rule that does the install for you.
You can create a rule that generates an executable (e.g. a shell script) that does the install for you (e.g. checksums the installed files to detect changes and only copies the files that have actually changed). You would have to use the extension language to do it; it would look similar to what the Docker rules do to load an image with the incremental loader.
Addendum: I forgot to say that the install itself would be run with the run command, i.e. bazel run install if the rule is named install in the top-level BUILD file.
