I'm trying to configure a Sawtooth network with at least 2 validators and some transaction processors. I'm using Ubuntu 18.04, so the only possible solution is using Docker.
I searched the entire day for a working example and still no luck. There is an example on the official website here, but it does not work. The Docker image version is 1.1, which is weird, because there is no such version on Docker Hub. Furthermore, it requires an image (hyperledger/sawtooth-poet-engine) that does not exist anywhere.
I know that the main validator should generate the keys and the genesis block, and the other validator(s) should use those artifacts. But what is the right configuration for the second validator? How can it read the generated artifacts from the first validator?
Thanks!
This is the config of the first validator:
validator-0:
  image: hyperledger/sawtooth-validator:1.0
  container_name: sawtooth-validator-default-0
  expose:
    - 4004
  ports:
    - "4004:4004"
  entrypoint: "bash -c \"\
    sawadm keygen && \
    sawtooth keygen my_key && \
    sawset genesis -k /root/.sawtooth/keys/my_key.priv && \
    sawadm genesis config-genesis.batch && \
    sawtooth-validator -vv \
      --endpoint tcp://validator:8800 \
      --bind component:tcp://eth0:4004 \
      --bind network:tcp://eth0:8800 \
    \""
You are using the Sawtooth 1.1 documentation (for the unreleased "nightly" build) with the released Sawtooth 1.0 software (the released "latest" build). You have 2 choices:
Follow the Sawtooth 1.0 documentation and use a 1.0 .yaml file, such as https://sawtooth.hyperledger.org/docs/core/releases/latest/app_developers_guide/docker.html# and https://sawtooth.hyperledger.org/docs/core/releases/latest/app_developers_guide/sawtooth-default.yaml
Upgrade to the "bleeding edge" unreleased Sawtooth 1.1 software at https://sawtooth.hyperledger.org/docs/core/releases/latest/sysadmin_guide/installation.html That is, use this key and repository:
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 44FC67F19B2466EA
$ sudo apt-add-repository "deb http://repo.sawtooth.me/ubuntu/nightly xenial universe"
(Note: option 2 does not work yet because the unreleased Sawtooth 1.1 images have not been uploaded to Docker Hub yet and are not available.)
The important point is to use the documentation that matches the release you have installed. Sorry for the confusion.
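To answer the question about the second validator: the genesis artifacts do not need to be copied between containers. Only the first validator runs sawadm genesis; additional validators generate their own keys and sync the genesis block over the network once they peer with the first one. A minimal sketch of the second validator's startup command follows (the hostnames validator-0 and validator-1 are assumptions, not taken from your compose file):

```shell
# Sketch: the second validator generates its own keys, skips genesis
# creation entirely, and peers with the first validator
# (hostname "validator-0" is assumed here).
sawadm keygen && \
sawtooth-validator -vv \
  --endpoint tcp://validator-1:8800 \
  --bind component:tcp://eth0:4004 \
  --bind network:tcp://eth0:8800 \
  --peers tcp://validator-0:8800
```

With static peering via --peers, the second validator downloads the genesis block and the rest of the chain from its peer automatically.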
Related
Step 1/11 : FROM hyperledger/sawtooth-shell:nightly
ERROR: Service 'shell' failed to build: manifest for hyperledger/sawtooth-shell:nightly not found
I am trying to build the supply chain application in a Linux environment, but the build is failing.
The Hyperledger Sawtooth Supply Chain repository has been modified for the nightly build, 1.2, which is not released yet. What I do is revert to the version that supports the current Sawtooth release, Sawtooth 1.1:
git clone https://github.com/hyperledger/sawtooth-supply-chain
cd sawtooth-supply-chain
git diff 50c404c >bionic.patch
patch --dry-run -R -p1 <bionic.patch
patch -R -p1 <bionic.patch
sudo docker-compose up
Another solution that I have seen, but have not tried, is a few Dockerfile tweaks:
diff --git a/shell/Dockerfile b/shell/Dockerfile
index 7ea0caba..b57c2db1 100644
--- a/shell/Dockerfile
+++ b/shell/Dockerfile
@@ -13,10 +13,10 @@
# limitations under the License.
# ------------------------------------------------------------------------------
-FROM hyperledger/sawtooth-shell:nightly
+FROM hyperledger/sawtooth-shell:bumper-nightly
# Install Python, Node.js, and Ubuntu dependencies
-RUN echo "deb http://repo.sawtooth.me/ubuntu/1.0/stable bionic universe" >> /etc/apt/sources.list \
+RUN echo "deb http://repo.sawtooth.me/ubuntu/1.0/nightly xenial universe" >> /etc/apt/sources.list \
&& (apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 44FC67F19B2466EA \
|| apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 44FC67F19B2466EA) \
&& apt-get update \
You can also ask these questions on the Sawtooth Supply Chain chat channel (free registration with The Linux Foundation):
https://chat.hyperledger.org/channel/sawtooth-supply-chain
We are running Kafka Connect 4.1.1 on DC/OS using the Confluent community package. How can we upload or add our JDBC driver to the remote cluster?
Update: It's a package installed from the DC/OS catalog, which is a Mesos framework running Docker images.
Update
Script borrowed from here (thanks to @rmoff).
It's an example of overriding the Docker CMD with a bash script to download and extract the REST API source connector.
bash -c 'echo Installing unzip… && \
curl -so unzip.deb http://ftp.br.debian.org/debian/pool/main/u/unzip/unzip_6.0-16+deb8u3_amd64.deb && \
dpkg -i unzip.deb && \
echo Downloading connector… && \
curl -so kafka-connect-rest.zip https://storage.googleapis.com/rmoff-connectors/kafka-connect-rest.zip && \
mkdir -p /u01/connectors/ && \
unzip -j kafka-connect-rest.zip -d /u01/connectors/kafka-connect-rest && \
echo Launching Connect… && \
/etc/confluent/docker/run'
You'll need to build your own Docker images and publish them to a resolvable Docker Registry for your Mesos cluster, and then edit the Mesos Service to pull these images instead of the Confluent one.
For example, in your Dockerfiles, you would have
ADD http://somepath.com/someJDBC-driver.jar /usr/share/java/kafka-connect-jdbc
Or use curl rather than ADD, as shown in the Confluent docs (because the .tar.gz file needs to be extracted).
FROM confluentinc/cp-kafka-connect
ENV MYSQL_DRIVER_VERSION 5.1.39
RUN curl -k -SL "https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-${MYSQL_DRIVER_VERSION}.tar.gz" \
| tar -xzf - -C /usr/share/java/kafka-connect-jdbc/ --strip-components=1 mysql-connector-java-5.1.39/mysql-connector-java-${MYSQL_DRIVER_VERSION}-bin.jar
You can also use confluent-hub install to add other connectors that aren't JDBC JAR files
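For example, a sketch of installing a connector with the Confluent Hub client (the exact connector coordinates and version tag here are assumptions for illustration):

```shell
# Install a connector from Confluent Hub into the image's plugin path.
# Run inside the cp-kafka-connect image; --no-prompt accepts the defaults
# non-interactively, which is what you want in a Dockerfile RUN step.
confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:latest
```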
I am new to Hyperledger; I have brief knowledge of Ethereum. I want to create a private network using Hyperledger. So, how can we create a private network in Hyperledger and deploy the smart contract or chaincode to that network?
I also want to build the same thing using R3 Corda, so is that possible inside Corda?
Does anyone have any reference links or steps for that?
As Kid101 mentions, you can set up a Corda network by following the instructions in https://docs.corda.net, and in particular https://docs.corda.net/setting-up-a-corda-network.html.
Hyperledger includes multiple blockchain technologies, including Hyperledger Sawtooth.
For Sawtooth, here's a brief summary of the package installation steps and initial setup. This will install the Hyperledger Sawtooth software and set up a blockchain with 1 block (the "genesis" block, block 0):
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 8AA7AF1F1091A5FD
$ sudo add-apt-repository 'deb http://repo.sawtooth.me/ubuntu/1.0/stable xenial universe'
$ sudo apt-get update
$ sudo apt-get install -y sawtooth
$ sawtooth keygen
$ sawset genesis
$ sudo -u sawtooth sawadm genesis config-genesis.batch
$ sudo sawadm keygen
Full details are at: https://sawtooth.hyperledger.org/docs/core/releases/latest/app_developers_guide/installing_sawtooth.html
There are also admin guides and tutorials to follow. But this will get you started.
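After the steps above, you can sanity-check the setup by starting the validator and REST API and listing the chain. This is a sketch, assuming the 1.0 Ubuntu packages and default ports:

```shell
# Start the validator and REST API (in separate terminals, or as systemd
# services if the packages installed them):
sudo -u sawtooth sawtooth-validator -vv
sudo -u sawtooth sawtooth-rest-api -v

# Then list blocks; a fresh network should show only the genesis block:
sawtooth block list
```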
I just tried to build my test image for a Jenkins course and got this issue:
+ docker build -t nginx_lamp_app .
/var/jenkins_home/jobs/docker-test/workspace@tmp/durable-d84b5e6a/script.sh: 2: /var/jenkins_home/jobs/docker-test/workspace@tmp/durable-d84b5e6a/script.sh: docker: not found
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
But I've already configured the docker socket in the docker-compose file for Jenkins, like this:
version: "2"
services:
  jenkins:
    image: "jenkins/jenkins:lts"
    ports:
      - "8080:8080"
    restart: "always"
    volumes:
      - "/var/jenkins_home:/var/jenkins_home"
      - "/var/run/docker.sock:/var/run/docker.sock"
But when I attach to the container, I also see "docker: not found" when I type the command "docker"...
I've also changed the permissions on the socket to 777.
What can be wrong?
Thanks!
You are trying to achieve a Docker-in-Docker kind of thing. Mounting just the docker socket will not make it work as you expect; you need to install the docker binary into the image as well. You can do this either by extending your jenkins image/Dockerfile, or by creating (docker commit) a new image after installing the docker binary into the running container, and then using that image for your CI/CD. Try to integrate the RUN statement below into the extended Dockerfile or the container to be committed (it should work on an ubuntu-based docker image):
RUN apt-get update && \
apt-get -y install apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common && \
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable" && \
apt-get update && \
apt-get -y install docker-ce
Ref - https://github.com/jpetazzo/dind
PS - It isn't really recommended (http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/)
Adding to that, you shouldn't mount host docker binary inside the container -
⚠️ Former versions of this post advised to bind-mount the docker
binary from the host to the container. This is not reliable anymore,
because the Docker Engine is no longer distributed as (almost) static
libraries.
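Putting it together, a sketch of building and running the extended image (the Dockerfile name and image tag here are made up for illustration):

```shell
# Build an extended Jenkins image whose Dockerfile includes the RUN
# statement above installing docker-ce, then run it with the host's
# docker socket mounted so builds run on the host daemon:
docker build -t jenkins-with-docker -f Dockerfile.jenkins .
docker run -d -p 8080:8080 \
  -v /var/jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins-with-docker
```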
I want to create a Docker container to load the MEAN stack (Mongo and Node specifically). From my understanding, I can't use multiple FROM statements in my Dockerfile, so what is the easiest way to set up both Node and Mongo on a Docker image?
Do i do this,
FROM node:0.10.40
RUN <whatever the mongo install command is>
or this,
FROM mongo:2.6.11
RUN <whatever the npm install command is>
or something else?
Look at the Dockerfiles backing these sources!
If they're both FROM comparable sources (i.e., ubuntu), then you should be able to take the mongo Dockerfile and modify it to go FROM the node image, thus generating an image with both services available.
Thus, amending the mongo:2.6.11 Dockerfile:
FROM node:0.10.40
RUN groupadd -r mongodb && useradd -r -g mongodb mongodb
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates curl \
numactl \
&& rm -rf /var/lib/apt/lists/*
# grab gosu for easy step-down from root
RUN gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
RUN curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.6/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.6/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu
RUN gpg --keyserver ha.pool.sks-keyservers.net --recv-keys DFFA3DCF326E302C4787673A01C4E7FAAAB2461C
ENV MONGO_VERSION 2.6.11
RUN curl -SL "https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-$MONGO_VERSION.tgz" -o mongo.tgz \
&& curl -SL "https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-$MONGO_VERSION.tgz.sig" -o mongo.tgz.sig \
&& gpg --verify mongo.tgz.sig \
&& tar -xvf mongo.tgz -C /usr/local --strip-components=1 \
&& rm mongo.tgz*
VOLUME /data/db
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 27017
...of course, you'll need to amend the entry point to run both services, if you were to do this.
However: Don't do this at all! The best-practices approach is to have multiple containers, one for each service, rather than building only one container that runs all services involved in your stack. Keeping your components each in their own, sandboxed namespace in this way reduces complexity in several respects: There's less room for security breaches to cross containers; there's less interdependence between containers (a software update needed for a new release of node won't break mongodb or the inverse); your containers don't need an init system or other components related to supervising multiple services; etc.
See the Container Linking documentation on the Docker website to understand how to configure your containers to be able to communicate.
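For example, a sketch of running the two services as separate, linked containers (the app path, entry script, and port are assumptions; the image versions come from the question):

```shell
# Run mongo in its own container, then link the node container to it;
# inside "app", mongo is reachable as hostname "mongo" on port 27017,
# i.e. mongodb://mongo:27017.
docker run -d --name mongo mongo:2.6.11
docker run -d --name app --link mongo:mongo -p 3000:3000 \
  -v "$PWD":/usr/src/app -w /usr/src/app \
  node:0.10.40 node server.js
```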