Trying to use Fuse to mount HDFS. Can't compile libhdfs - ant

I'm attempting to compile libhdfs (a native shared library that allows external apps to interface with HDFS). It's one of the steps I have to take to mount Hadoop's HDFS using FUSE.
The compilation seems to go well for a while but ends with "BUILD FAILED" and the following problems summary:
commons-logging#commons-logging;1.0.4: configuration not found in commons-logging#commons-logging;1.0.4: 'master'. It was required from org.apache.hadoop#Hadoop;working#btsotbal800 commons-logging
log4j#log4j;1.2.15: configuration not found in log4j#log4j;1.2.15: 'master'. It was required from org.apache.hadoop#Hadoop;working#btsotbal800 log4j
Now, I have a couple of questions about this, since the book I'm using doesn't go into any detail about what these things really are.
Are commons-logging and log4j libraries which Hadoop uses?
These libraries seem to live in $HADOOP_HOME/lib, but they are jar files. Should I extract them, try to change some configuration, and then repackage them into jars?
What does 'master' in the errors above mean? Are there different versions of the libraries?
Thank you in advance for ANY insight you can provide.

If you are using Cloudera Hadoop (CDH3u2), you don't need to build the FUSE project.
You can find the binary (libhdfs.so*) inside the directory $HADOOP_HOME/c++/lib.
Before doing the FUSE mount, update $HADOOP_HOME/contrib/fuse-dfs/src/fuse_dfs_wrapper.sh as follows:
#!/bin/bash
# build the Hadoop classpath from the core and lib jars
for f in ${HADOOP_HOME}/hadoop*.jar ; do
  export CLASSPATH=$CLASSPATH:$f
done
for f in ${HADOOP_HOME}/lib/*.jar ; do
  export CLASSPATH=$CLASSPATH:$f
done
export PATH=$HADOOP_HOME/contrib/fuse-dfs:$PATH
export LD_LIBRARY_PATH=$HADOOP_HOME/c++/lib:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server/
# pass all command-line arguments through to fuse_dfs ($# would only be the argument count)
fuse_dfs "$@"
LD_LIBRARY_PATH here contains a list of directories:
"$HADOOP_HOME/c++/lib" contains libhdfs.so, and
"/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server/" contains libjvm.so.
Adjust the /usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server/ part to match your own JAVA_HOME.
Use the following command to mount HDFS:
fuse_dfs_wrapper.sh dfs://localhost:9000/ /home/510600/mount1
To unmount, use:
fusermount -u /home/510600/mount1
Note that I tested FUSE only in Hadoop pseudo-distributed mode, not in cluster mode.
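As a quick sanity check after mounting (a minimal sketch, using the same mount point as above):
df -h /home/510600/mount1   # should show a FUSE filesystem mounted here
ls /home/510600/mount1      # should list the contents of the HDFS root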


Install dependencies in container using deps.edn

I inherited a Clojure code base and I'm trying to containerize it for local development. The creators used deps.edn to manage the dependencies. However, I can't figure out what RUN command I should use to pre-install the dependencies for the project.
Currently, my entrypoint is ['clj', '-m', 'app'], which installs the dependencies every time I start the container.
How do I pre-install dependencies for a Clojure project using a Docker RUN command?
Deps/CLI caching is described here. Generally speaking, dependencies are downloaded once and saved in a subdirectory of the project directory named ./.cpcache ("class path cache").
The ./.cpcache directory is analogous to the ~/.m2 cache directory used by Maven and related tools (e.g. Leiningen).
If you run the code locally, you should be able to copy the .cpcache dir with its cached dependencies into your Docker container. Then the dependencies don't need to be re-downloaded on each startup of the Docker container.
See also the Deps/CLI overview.
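For example, after running the app once locally you can peek at the cache to see what gets reused (the file names are content hashes and will differ on your machine):
ls .cpcache
cat .cpcache/*.cp | tr ':' '\n' | head   # each *.cp file holds a computed classpath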
P.S.
This template project is set up to run using both lein and Deps/CLI via the Kaocha tool. You may find the comparison helpful.
P.P.S.
You may find it easiest to run your code by building an uberjar file, which contains all your code and all dependencies in a single artifact. You can do this either with Leiningen or with other tools such as depstar. You then invoke the application with a single command like:
java -jar demo-0.1.0-standalone.jar
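If you go the Leiningen route, the build step itself is just the following (the jar lands under target/ with a name taken from your project.clj; the exact path depends on your Leiningen version):
lein uberjar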
Running this should do it (-P tells the CLI to "prepare": download dependencies and cache the classpath without executing anything):
clj -P
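Building on that, a minimal Dockerfile sketch (the base image tag and source layout here are assumptions; adjust to your project):
FROM clojure:temurin-17-tools-deps
WORKDIR /app
# copy only the dependency declaration first, so this layer stays cached until deps.edn changes
COPY deps.edn .
# download dependencies at build time instead of at container startup
RUN clj -P
COPY . .
ENTRYPOINT ["clj", "-m", "app"]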

Build BigBlueButton from dockerfile

I want to build the BigBlueButton system from its Dockerfiles.
I have tried a few times with the existing Dockerfiles in the source code, but it has not succeeded.
I found many BigBlueButton components in the source code, but I don't know what order to build them in.
This is the source link, and I have listed some components that have Dockerfiles:
https://github.com/bigbluebutton/bigbluebutton/
akka-bbb-apps
akka-bbb-fsesl
bbb-common-message
bbb-common-web
bbb-fsesl-client
bbb-libreoffice
bbb-lti
bbb-webhooks
bigbluebutton-html5
bigbluebutton-web
Which component do I have to build first? And what comes next?
A good resource is the lab build for Kubernetes, which has a lot of information about how to build the BBB modules.
Based on the makefile in the labs directory of the main source, I would try the following order:
bbb-common-message, bbb-common-web, bbb-fsesl-client, akka-bbb-apps, akka-bbb-fsesl, bigbluebutton-web, bigbluebutton-html5, bbb-webrtc-sfu, bbb-webhooks
You will need all the build dependencies installed, such as sbt for the Scala modules.
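As a rough sketch (assuming the shared Scala libraries build with sbt and publish to the local repository; some modules, e.g. bbb-common-web, may use Gradle instead):
# publish the shared libraries locally first, in dependency order
(cd bbb-common-message && sbt publishLocal)
(cd bbb-fsesl-client && sbt publishLocal)
# then build the services that consume them
(cd akka-bbb-apps && sbt compile)
(cd akka-bbb-fsesl && sbt compile)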
If you just want to work on the HTML5 client or custom integrations, then docker-dev is a good option, not least because it sets up nginx with certificates so you can develop locally over https.
If you are not familiar with the BBB architecture, I would suggest setting up docker-dev to get insight into how the modules interact.

How can I unblacklist 'libnvomx.so', in order to resolve "no such element factory 'omxh264enc'!"?

(Background: In a docker container on a NVidia Jetson TX2 board I have decompressed NVidia's Linux For Tegra tarball which contains lots of drivers and shared object files, some of which provide GStreamer element factories which produce elements that I use in my GStreamer pipeline. I am trying to run the pipeline in the docker container.)
However, there is an element in my GStreamer pipeline (on this Tegra board), called 'omxh264enc', which I haven't been able to create.
I've put the corresponding libnvomx.so in my drivers folder, which is on the exported GST_PLUGIN_PATH and LD_LIBRARY_PATH.
ldd -r does not show any missing libraries for libnvomx.so.
However, when I try to run the pipeline, the output includes:
WARN omx gstomx.c:2826:plugin_init: Failed to load configuration file: Valid key file could not be found in search dirs (searched in: /root/.config:/etc/xdg as per GST_OMX_CONFIG_DIR environment variable, the xdg user config directory (or XDG_CONFIG_HOME) and the system config directory (or XDG_CONFIG_DIRS)
INFO omx gstomx.c:2831:plugin_init: Using default configuration
ERROR omx gstomx.c:2894:plugin_init: Core '/usr/lib/aarch64-linux-gnu/tegra/libnvomx.so' does not exist for element 'omxh264enc'
WARN GST_PLUGIN_LOADING gstplugin.c:526:gst_plugin_register_func: plugin "/gst_1.8.3/libs/gstreamer-1.0/libnvomx.so" failed to initialise
and when I use GST_DEBUG=3 gst-inspect-1.0 libnvomx.so, it shows that libnvomx.so is blacklisted:
Plugin Details:
Name libnvomx.so
Description Plugin for blacklisted file
Filename /gst_1.8.3/libs/gstreamer-1.0/libnvomx.so
Version 0.0.0
License BLACKLIST
Source module BLACKLIST
Binary package BLACKLIST
Origin URL BLACKLIST
I have copied libnvomx.so into /usr/lib/aarch64-linux-gnu/tegra but this did not make a difference (probably because libnvomx is blacklisted).
I don't know where to find the gstomx.conf file, in which I could perhaps change the path /usr/lib/aarch64-linux-gnu/tegra/libnvomx.so to my designated drivers folder (/gst_1.8.3/libs/gstreamer-1.0/). I have used 'find' on /etc and some other folders but didn't find it (I didn't actually find a .config folder on the system).
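From the gst-omx sources it looks like plugin_init reads a gstomx.conf from GST_OMX_CONFIG_DIR (or the xdg config dirs), with a per-element section like the one below. This is only what I'm considering trying; the component-name is a guess I haven't verified for Tegra, and the paths are my local ones:
# contents of /gst_1.8.3/conf/gstomx.conf (component-name unverified)
[omxh264enc]
type-name=GstOMXH264Enc
core-name=/gst_1.8.3/libs/gstreamer-1.0/libnvomx.so
component-name=OMX.Nvidia.h264.encoder
and then, to make GStreamer re-scan the plugin instead of trusting the cached blacklist entry:
export GST_OMX_CONFIG_DIR=/gst_1.8.3/conf
rm -f ~/.cache/gstreamer-1.0/registry.*.bin
GST_DEBUG=3 gst-inspect-1.0 omxh264enc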
(There are also other plugins with missing symbols: nvidia_drv.so with undefined symbol TimerSet, and libglx.so with undefined symbol serverClient. I would like to find out what is supposed to provide these symbols, but these are not (direct) dependencies of libnvomx.so.)
So how can I initialise / unblacklist libnvomx.so so I can use 'omxh264enc'? Do I need to find / make a gstomx configuration file or can I make it work with the default configuration? I read somewhere there may be a solution using a 'symlink' but at the moment I'm not familiar with what these are or how these work.
Let me know if you need more info, thanks.

Cryptogen tool not found. Hyperledger fabric installation in Windows 10

I followed the Hyperledger Fabric documentation to install and configure it on Windows 10. However, when I run the command "./byfn.sh -m generate" for the first-network sample application, I get a "Cryptogen tool not found" error.
I have gone through all the Stack Overflow questions regarding this and made sure the following steps are done:
Have set the $PATH variable correctly to include the bin folder.
Have downloaded the platform-specific binaries into the bin folder.
I have doubts about the following step:
I have installed Docker for Windows and was able to verify the installation by running the hello-world image. However, I have not shared any of my local drives in Docker; I'm not sure whether this is the cause of the error.
Please note that this is my first question in StackOverflow. Forgive me for any mistakes/redundancies. Any help is greatly appreciated.
I'd suggest making sure that you run the script to download / install the binaries and images from within the fabric-samples directory.
The $PATH is exported every time you run the byfn.sh script; confirm that the path configuration in byfn.sh is correct and points to your bin location:
# prepending $PWD/../bin to PATH to ensure we are picking up the correct binaries
# this may be commented out to resolve installed version of tools if desired
export PATH=${PWD}/../bin:${PWD}:$PATH
export FABRIC_CFG_PATH=${PWD}
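A quick way to confirm the binaries are actually being picked up (run from the first-network directory; this assumes the standard fabric-samples layout with the binaries under fabric-samples/bin):
export PATH=${PWD}/../bin:$PATH
which cryptogen     # should print the path to the downloaded binary
cryptogen version
peer version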

How to install waf?

I have cloned and built the waf script using:
./waf-light configure
Then, to build my project (provided by Gomspace), I need to add waf and eclipse.py to my path. So far I haven't found anything better than this setenv script:
WAFROOT=~/git/waf/
export PYTHONPATH=$WAFROOT/waflib/extras/:$PYTHONPATH
export PATH=~/git/waf/:$PATH
Called with:
source setenv
This is a somewhat ugly solution. Is there a more elegant way to install waf?
You don't install waf. The command you found correctly builds waf: ./waf-light configure build. Then, for each project you create, you put the built waf script into that project's root directory. I can't find a reference, but this is the way in which waf's primary author, Thomas Nagy, wants the tool to be used. Projects that repackage waf to make the tool installable aren't "officially sanctioned."
There are advantages and disadvantages with non-installation:
Disadvantages:
You have to add the semi-binary, roughly 100 kB waf file to your repository.
Because the file contains binary code, people can have legal objections to distributing it.
Advantages:
It doesn't matter if new versions of waf break the old API.
Users don't need to install waf before compiling the project -- having Python on the system is enough.
Fedora (at least Fedora 22) has a yum package for waf, so a system install of waf is clearly possible, albeit with a hack.
After you run something like python3 ./waf-light configure build, you'll get a file called waf that's actually a Python script with some binary data at the end. If you put it into /usr/bin and run it as non-root, you'll get an error because it fails to create a directory in /usr/bin. If you run it as root, you'll get the new directory and /usr/bin/waf runs normally.
Here's the trick that I learned from examining the find_lib() function in the waf Python script.
1. Copy waf to /usr/bin/waf.
2. As root, run /usr/bin/waf. Notice that it creates a directory; you'll see something like /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18.
3. Move that directory to /usr/lib, dropping the leading dot from the name, e.g. mv /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18 /usr/lib/waf-2.0.19-b2f63c807a4215294bf6005410c74c18.
4. If you want to use waf with Python 3, repeat steps 2-3 running the Python script /usr/bin/waf under Python 3. Under Python 3, the directory names will start with .waf3-/waf3- instead of .waf-/waf-.
5. (Optional) Remove the binary data at the end of /usr/bin/waf.
Now, non-root should be able to just use /usr/bin/waf.
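Put together, the whole procedure looks roughly like this (the hash-suffixed directory name is the example from above; yours will differ depending on the waf version):
python3 ./waf-light configure build
sudo cp waf /usr/bin/waf
# running it once as root unpacks the hidden directory into /usr/bin
sudo /usr/bin/waf --version
# move the unpacked directory to /usr/lib, dropping the leading dot
sudo mv /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18 /usr/lib/waf-2.0.19-b2f63c807a4215294bf6005410c74c18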
That said, here's something to consider, echoing another answer: I believe waf's author intended waf to be embedded in projects so that each project can use its own version of waf without fear that the project will fail to build when newer versions of waf appear. Thus, the one-global-version use case seems not to be officially supported.
