Cannot find package cid in $GOROOT or $GOPATH - docker

I am trying to customize the chaincode of the tuna-app example. I want to use the cid package inside my chaincode to make ABAC decisions about who is allowed to run the chaincode. When I try to install the chaincode, I get the following error:
Error: Error getting chaincode code chaincode:
Error getting chaincode package bytes: Error obtaining dependencies for github.com/hyperledger/fabric/core/chaincode/lib/cid:
<go, [list -f {{ join .Deps "\n"}} github.com/hyperledger/fabric/core/chaincode/lib/cid]>: failed with error: "exit status 1"
cannot load package: package github.com/hyperledger/fabric/core/chaincode/lib/cid: cannot find package "github.com/hyperledger/fabric/core/chaincode/lib/cid" in any of:
/opt/go/src/github.com/hyperledger/fabric/core/chaincode/lib/cid (from $GOROOT)
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/lib/cid (from $GOPATH)
I am using Docker to run the peer, orderer, ca, and cli containers. The Docker image used to build chaincode is hyperledger/fabric-ccenv. This image is created from a Dockerfile; the interesting line I found was:
ADD payload/goshim.tar.bz2 $GOPATH/src/
which extracts the tar.bz2 into the $GOPATH/src folder (I believe). The .tar.bz2 file contains all the Go packages used by chaincode. I tried to insert the cid package, created a new .tar.bz2 file with the package inside, and then rebuilt the image. The image now contains the cid package, but I still get the same error.
Why is it still missing the package?

In the startFabric.sh from your tuna-app, you launch the cli container using:
docker-compose -f ./docker-compose.yml up -d cli
Have a look at the volume mount declarations in your compose YAML file. Because the tuna-app is based on fabcar from the fabric-samples, you should see something like this:
./../chaincode/:/opt/gopath/src/github.com/
If you see this declaration, copy the hyperledger/fabric/core/chaincode/lib/cid folder on your local machine into your chaincode folder. You should find it under chaincode/abac if you are using the latest version of the fabric-samples (https://github.com/hyperledger/fabric-samples).
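For example, a rough sketch of that copy on the host (the source path assumes the fabric sources sit under your local $GOPATH; adjust it to wherever the cid folder actually lives):
# run on the host, next to the tuna-app folder; paths follow the volume mount above
mkdir -p ../chaincode/hyperledger/fabric/core/chaincode/lib
cp -r $GOPATH/src/github.com/hyperledger/fabric/core/chaincode/lib/cid ../chaincode/hyperledger/fabric/core/chaincode/lib/
With the ./../chaincode/:/opt/gopath/src/github.com/ mount, the copied folder then resolves inside the cli container as /opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/lib/cid, which is exactly the import path the chaincode build is looking for.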

I think you should not create a new goshim.tar.bz2. If you find that approach easier, make sure cid is at the correct path within the archive, e.g. github.com/hyperledger/fabric/core/chaincode/lib/cid.
To test this you can add a debug output to the Dockerfile:
ADD payload/goshim.tar.bz2 $GOPATH/src/
RUN ls $GOPATH/src/github.com/hyperledger/fabric/core/chaincode/lib/cid
Instead, I would recommend downloading cid within the Dockerfile:
RUN go get -d github.com/hyperledger/fabric/core/chaincode/lib/cid
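If you go that route, a rough sketch of the rebuild and a sanity check could look like this (the image tag and the /opt/gopath location are assumptions based on the error output above; use whatever tag your peer pulls for the chaincode build image):
# rebuild the chaincode build image from the amended Dockerfile
docker build -t hyperledger/fabric-ccenv:latest .
# sanity check: the package should now be visible inside the image
docker run --rm hyperledger/fabric-ccenv:latest ls /opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/lib/cid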

Related

How to access scala shell using docker image for spark

I just downloaded this Docker image to set up a Spark cluster with two worker nodes. The cluster is up and running; however, I want to submit my Scala file to this cluster, and I am not able to start spark-shell in it.
When I was using another Docker image, I was able to start spark-shell.
Can someone please explain whether I need to install Scala separately in the image, or whether there is a different way to start it?
UPDATE
Here is the error:
bash: spark-shell: command not found
root@a7b0682ff17d:/opt/spark# ls /home/shangupta/Scripts/
ProfileData.json demo.scala queries.scala
TestDataGeneration.sql input.scala
root@a7b0682ff17d:/opt/spark# spark-shell /home/shangupta/Scripts/input.scala
bash: spark-shell: command not found
root@a7b0682ff17d:/opt/spark#
You're getting command not found because PATH isn't correctly established.
Use the absolute path /opt/spark/bin/spark-shell.
Also, I'd suggest packaging your Scala project as an uber jar and submitting that, unless you have no external dependencies or don't mind adding --packages/--jars manually.
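For example (a sketch only; the class name, master URL, and jar path are placeholders you would replace with your own):
# run the shell by its absolute path, optionally preloading your script
/opt/spark/bin/spark-shell -i /home/shangupta/Scripts/input.scala
# or build an uber jar and submit it instead
/opt/spark/bin/spark-submit --class com.example.Main --master spark://<master-host>:7077 /home/shangupta/myapp-assembly.jar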

Using JMeter plugins with justb4/jmeter Docker image results in error

Goal
I am using Docker to run JMeter in Azure Devops. I am trying to use Blazemeter's Parallel Controller, which is not native to JMeter. So, according to the justb4/jmeter image documentation, I used the following command to get the image going and run the JMeter test:
docker run --name jmetertest -i -v /home/vsts/work/1/s/plugins:/plugins -v $ROOTPATH:/test -w /test justb4/jmeter ${#:2}
Error
However, it produces the following error while trying to accommodate the plugin (I know the plugin makes the difference, because the test works without it):
cp: can't create '/test/lib/ext': No such file or directory
As far as I understand, this is an error produced when one of the parent directories of the directory you are trying to make does not exist. Is there something I am doing wrong, or is there actually something wrong with the image?
References
For reference, I will include links to the image documentation and the repository.
Image: https://hub.docker.com/r/justb4/jmeter
Repository: https://github.com/justb4/docker-jmeter
Looking into the Dockerfile:
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
Looking into entrypoint.sh:
if [ -d /plugins ]
then
    for plugin in /plugins/*.jar; do
        cp $plugin $(pwd)/lib/ext
    done;
fi
It basically copies the plugins from the /plugins folder (if it is present) to the lib/ext folder relative to the current working directory.
I don't know why you added the -w /test stanza to your command line, but it explicitly "tells" the container that the working directory is /test, not /opt/apache-jmeter-xxxx; that's why the script fails to copy the files.
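One possible quick fix (a sketch; the test plan and result file names are placeholders): drop -w /test and reference the files by their full path inside the container, so the entrypoint copies the plugins into the JMeter installation's lib/ext as intended:
docker run --name jmetertest -i -v /home/vsts/work/1/s/plugins:/plugins -v $ROOTPATH:/test justb4/jmeter -n -t /test/plan.jmx -l /test/results.jtl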
In general I don't think that the approach is very valid, because:
In Azure DevOps you won't have your "local" folder (unless you want to put the plugin binaries under the version control system)
Some JMeter plugins have other .jars as dependencies, so when you're installing a plugin you should:
put the plugin itself under the /lib/ext folder of your JMeter installation
put the plugin dependencies under the /lib folder of your JMeter installation
So I would recommend amending the Dockerfile to download the JMeter Plugins Manager and install the plugin(s) you need from the command line.
Something like:
RUN wget https://jmeter-plugins.org/get/ -O /opt/apache-jmeter-${JMETER_VERSION}/lib/ext/jmeter-plugins-manager.jar
RUN wget https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2/cmdrunner-2.2.jar -P /opt/apache-jmeter-${JMETER_VERSION}/lib/
RUN java -cp /opt/apache-jmeter-${JMETER_VERSION}/lib/ext/jmeter-plugins-manager.jar org.jmeterplugins.repository.PluginManagerCMDInstaller
RUN /opt/apache-jmeter-${JMETER_VERSION}/bin/./PluginsManagerCMD.sh install bzm-parallel

Trying to install logging on google cloud run but it's failing

I am trying to follow these instructions to log correctly from Java through Logback to Cloud Run:
https://cloud.google.com/logging/docs/setup/java
If I use JDK 8, I get Jetty ALPN-missing issues, so I moved to the Docker image openjdk:10-jre-slim, and my Dockerfile is simple:
FROM openjdk:10-jre-slim
RUN mkdir -p ./webpieces
COPY . ./webpieces/
COPY config/logback.cloudrun.xml ./webpieces/config/logback.xml
WORKDIR "/webpieces"
ENTRYPOINT ./bin/customerportal -http.port=:$PORT -hibernate.persistenceunit=cloud-production
The only difference is that I switched the base image from openjdk:8-jdk-alpine, which worked fine!
When I deploy to Google Cloud I get this error:
Deploying container to Cloud Run service [staging-customerportal] in project [orderly-gcp] region [us-west1]
⠏ Deploying... Cloud Run error: Invalid argument error. Invalid ENTRYPOINT. [name: "gcr.io/orderly-gcp/customerportal2@sha256:6c1c2e7531684d8f50a3120f1de60cade841ab1d9069b704ee3fd8499c5b7779"
error: "Invalid command \"/bin/sh\": file not found"
].
X Deploying... Cloud Run error: Invalid argument error. Invalid ENTRYPOINT. [name: "gcr.io/orderly-gcp/customerportal2@sha256:6c1c2e7531684d8f50a3120f1de60cade841ab1d9069b704ee3fd8499c5b7779"
error: "Invalid command \"/bin/sh\": file not found"
].
. Routing traffic...
Deployment failed
ERROR: (gcloud.run.deploy) Cloud Run error: Invalid argument error. Invalid ENTRYPOINT. [name: "gcr.io/orderly-gcp/customerportal2@sha256:6c1c2e7531684d8f50a3120f1de60cade841ab1d9069b704ee3fd8499c5b7779"
error: "Invalid command \"/bin/sh\": file not found"
].
However, when I run locally to test, I get this error about a project ID being required, so the image itself seems to be working. SIDE QUESTION: how can I simulate this project ID so I can still run locally?
03:10:08,650 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [CLOUD]
03:10:09,868 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@14:13 - RuntimeException in Action for tag [appender] java.lang.IllegalArgumentException: A project ID is required for this service but could not be determined from the builder or the environment. Please set a project ID using the builder.
at java.lang.IllegalArgumentException: A project ID is required for this service but could not be determined from the builder or the environment. Please set a project ID using the builder.
at at com.google.common.base.Preconditions.checkArgument(Preconditions.java:142)
at at com.google.cloud.ServiceOptions.<init>(ServiceOptions.java:285)
at at com.google.cloud.logging.LoggingOptions.<init>(LoggingOptions.java:98)
at at com.google.cloud.logging.LoggingOptions$Builder.build(LoggingOptions.java:92)
at at com.google.cloud.logging.LoggingOptions.getDefaultInstance(LoggingOptions.java:52)
at at com.google.cloud.logging.logback.LoggingAppender.getLoggingOptions(LoggingAppender.java:246)
at at com.google.cloud.logging.logback.LoggingAppender.getProjectId(LoggingAppender.java:209)
at at com.google.cloud.logging.logback.LoggingAppender.start(LoggingAppender.java:194)
at at ch.qos.logback.core.joran.action.AppenderAction.end(AppenderAction.java:90)
at at ch.qos.logback.core.joran.spi.Interpreter.callEndAction(Interpreter.java:309)
at at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:193)
at at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:179)
at at ch.qos.logback.core.joran.spi.EventPlayer.play(EventPlayer.java:62)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:165)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:152)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:110)
Java 10 is EOL, and the official images have been removed. More detail here.
Prefer a Java 11 version.
Anyway, some image variants are optimized and do not install bash by default (to reduce their size), and you have to install it yourself.
For a local run, I don't recommend using a JSON key file (in general, don't use JSON key files except for automated systems outside GCP) due to security constraints: key rotation, secure storage, and so on.
To set the project, simply run gcloud config set project MY_PROJECT. You don't need credentials for this.
Since your current question is how to simulate the project ID for local testing:
You should download a service account key file from https://console.cloud.google.com/iam-admin/serviceaccounts/project?project=MY_PROJECT, make it accessible inside the Docker container, and activate it via:
gcloud auth activate-service-account --key-file my_service_account.json
gcloud config set project MY_PROJECT
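One way to emulate that inside a local container run (a sketch; the image name, key path, and port are placeholders) is to pass the project and application-default credentials through environment variables, which the google-cloud-java client picks up:
# GOOGLE_CLOUD_PROJECT supplies the project ID, GOOGLE_APPLICATION_CREDENTIALS the key file
docker run -p 8080:8080 -e PORT=8080 -e GOOGLE_CLOUD_PROJECT=MY_PROJECT -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/key.json -v $PWD/my_service_account.json:/tmp/key.json:ro gcr.io/MY_PROJECT/customerportal2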
This problem may be due to the fact that Alpine doesn't have bash (or even /bin/sh), so a solution could be to remove the dependency on a shell entirely, either by not using bash or by using the exec form of ENTRYPOINT instead of the shell form.
In my case I solved the problem by using a more complete base image instead of Alpine, for instance.
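A minimal sketch of the exec-form variant for the Dockerfile above (note that without a shell, $PORT is not expanded on the command line, so the application would have to read the PORT environment variable itself; the remaining flag is just the one from the question):
# exec form: Docker runs the binary directly, no /bin/sh required
ENTRYPOINT ["./bin/customerportal", "-hibernate.persistenceunit=cloud-production"]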
HTH

/usr/lib/x86_64-linux-gnu/libLLVM-4.0.so.1: version `LLVM_4.0' not found

I am trying to run a tool that uses Clang and LLVM. The tool is called cppgrep and is available with the Docker container. Please find it in the GitHub repository: https://github.com/peter-can-talk/cppnow-2017. I have tried using Ubuntu 16.04 and 17.10, and I got the same error in both:
root@522051d201d2:/home# ./cppgrep -help
./cppgrep: /usr/lib/x86_64-linux-gnu/libLLVM-4.0.so.1: version `LLVM_4.0' not found (required by ./cppgrep)
./cppgrep: /usr/lib/x86_64-linux-gnu/libclang-4.0.so.1: version `LLVM_4.0' not found (required by ./cppgrep)
root@522051d201d2:/home#
After some online searching, I found that I had to set the environment variable LD_LIBRARY_PATH. So as a first step I found the library file locations inside the container; please find the output below:
root@522051d201d2:/home# find / -iname *libclang*.so*
/usr/lib/x86_64-linux-gnu/libclang-4.0.so
/usr/lib/x86_64-linux-gnu/libclang-4.0.so.1
/usr/lib/llvm-4.0/lib/libclang.so.1
/usr/lib/llvm-4.0/lib/libclang-4.0.so
/usr/lib/llvm-4.0/lib/libclang-4.0.0.so
/usr/lib/llvm-4.0/lib/libclang.so
/usr/lib/llvm-4.0/lib/libclang-4.0.so.1
/usr/lib/llvm-4.0/lib/clang/4.0.0/lib/linux/libclang_rt.dyndd-x86_64.so
/usr/lib/llvm-4.0/lib/clang/4.0.0/lib/linux/libclang_rt.asan-i686.so
/usr/lib/llvm-4.0/lib/clang/4.0.0/lib/linux/libclang_rt.asan-x86_64.so
/usr/lib/llvm-4.0/lib/clang/4.0.0/lib/linux/libclang_rt.asan-i386.so
After this step, I set LD_LIBRARY_PATH as follows:
root@522051d201d2:/home# echo $LD_LIBRARY_PATH
/usr/lib:/usr/lib/llvm-4.0/lib/:/usr/lib/x86_64-linux-gnu/
And lastly, I exported it using the command export LD_LIBRARY_PATH. Now, if I try to run the cppgrep tool, I still get the same error. The commands to test the tool after building the Docker image are as follows:
(1) cd into the cppgrep directory, like code/cppgrep,
(2) enter the docker container and mount the folder under /home:
$ docker run -it -v $PWD:/home clang
(3) run cppgrep using the ./cppgrep 'x' test.cpp command.
It is supposed to return the functions and variables named x.
To replicate the error, after downloading and unzipping the files from the GitHub repository, build the Docker image using the docker build -t clang . command, then follow steps 1-3 in the paragraph above.
After a couple of days of struggle, I solved it!
My initial assumption about the reason for the error was correct: the Clang/LLVM environment was not available to the cppgrep tool, but I made a mistake in how I provided that environment information to it.
The answer has two steps: (1) change the Makefile to point to the correct location where LLVM is installed; in my case, I changed the following line in the Makefile from HEADERS := -isystem /llvm/include/ to HEADERS := -isystem /usr/lib/llvm-4.0/include/. (2) Compile again using the make command; just touch (or re-save) cppgrep.cpp before running it, otherwise you will get the message make: Nothing to be done for 'all'.
That is it: now you should be able to run the cppgrep tool with ./cppgrep 'x' test.cpp or ./cppgrep -help. To use the other tools in this container, such as ast-dump, mccabe, etc., you have to follow the same two steps before using them.
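Put together, the fix looks roughly like this when run inside the container (a sketch; it assumes the cppgrep sources are mounted at /home as in step (2) and that the LLVM headers live under /usr/lib/llvm-4.0/include/):
cd /home
# point the Makefile at the actual LLVM include directory
sed -i 's#-isystem /llvm/include/#-isystem /usr/lib/llvm-4.0/include/#' Makefile
# touch the source so make rebuilds instead of reporting "Nothing to be done"
touch cppgrep.cpp
make
./cppgrep -help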

Docker hub automated build fails but locally does not

I have set up an automated build on Docker Hub here (the sources are here).
The build goes well locally. I have also tried to rebuild it with the --no-cache option:
docker build --no-cache .
And the process completes successfully
Successfully built 68b34a5f493a
However, the automated build fails on Docker hub with this error log:
...
Cloning into 'nerdtree'...
Vim: Warning: Output is not to a terminal
Vim: Warning: Input is not from a terminal
Error detected while processing command line:
E492: Not an editor command: PluginInstall
E492: Not an editor command: GoInstallBinaries
mv: cannot stat `/go/bin/*': No such file or directory
This build apparently fails on the following vim command:
vim +PluginInstall +GoInstallBinaries +qall
Note that the warnings Output is not to a terminal and Input is not from a terminal also appear in the local build.
I cannot understand how this can happen. I am using a standard Ubuntu 14.04 system.
I finally figured it out. The issue was related to this one.
I am using Docker 1.0 on my host machine, but a later version is in production on Docker Hub. Without an explicit ENV HOME=... line in the Dockerfile, version 1.0 uses / as the home directory, while the later version uses /root. The result is that vim was not able to find its .vimrc file, since it was copied to / instead of /root. The solution I used is to explicitly define ENV HOME=/root in my Dockerfile, so there is no difference between the two versions.
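A minimal sketch of the relevant Dockerfile lines (the COPY of the .vimrc is an assumption based on the description above; the explicit ENV HOME is the important part):
ENV HOME=/root
# copy the vim configuration to the home directory both Docker versions now agree on
COPY .vimrc /root/.vimrc
RUN vim +PluginInstall +GoInstallBinaries +qall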
