Binomial heap sibling linked-list reversal

I don't understand this function that reverses the sibling list of a node l in a binomial heap:
Node* reverseList(Node* l) {
    //return reverseNode(l);
    if (l->sibling)
    {
        return reverseList(l->sibling);
        l->sibling->sibling = l;
    }
}
What does this line mean?
l->sibling->sibling = l;
The parent?

A return statement ends the execution of the function, so you are asking about dead code.
I would expect the function to actually be like this:
Node* reverseList(Node* l) {
    if (l->sibling)
    {
        Node* head = reverseList(l->sibling);
        l->sibling->sibling = l;
        l->sibling = NULL;
        return head;
    }
    return l;
}
To visualise this, let an example linked list consist of three nodes:
l
↓
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ sibling: ─────────► │ sibling: ─────────► │ sibling: NULL │
│ │ │ │ │ │
└───────────────┘ └───────────────┘ └───────────────┘
When the function is called, we get into the if and make a recursive call. That new (second) execution context has its own local variables, and to distinguish them, I will add an accent to their names. So we have another l' variable:
l l'
↓ ↓
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ sibling: ─────────► │ sibling: ─────────► │ sibling: NULL │
│ │ │ │ │ │
└───────────────┘ └───────────────┘ └───────────────┘
That second execution will likewise get into the if and make a recursive call:
l l' l"
↓ ↓ ↓
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ sibling: ─────────► │ sibling: ─────────► │ sibling: NULL │
│ │ │ │ │ │
└───────────────┘ └───────────────┘ └───────────────┘
The latest (third) execution of the function gets l'->sibling as argument and will assign that to its own local variable l". It will find that l"->sibling is NULL, and so it just returns the same pointer without making any alteration. At this moment the lifetime of the variable l" ends. The caller assigns the returned value to a local head' variable -- again the accent to make clear this happens in the second execution context:
l l'
↓ ↓
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ sibling: ─────────► │ sibling: ─────────► │ sibling: NULL │
│ │ │ │ │ │
└───────────────┘ └───────────────┘ └───────────────┘
↑
head'
Now we get to the statement: l'->sibling->sibling = l'. That means an assignment is made to the sibling member of the last node, and so we get:
l l'
↓ ↓
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ sibling: ─────────► │ sibling: ─────────► │ sibling: ─┐ │
│ │ │ │ ◄───────────────┘ │
└───────────────┘ └───────────────┘ └───────────────┘
↑
head'
Then we execute l'->sibling = NULL:
l
↓
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ sibling: ─────────► │ sibling: NULL │ │ sibling: ─┐ │
│ │ │ │ ◄───────────────┘ │
└───────────────┘ └───────────────┘ └───────────────┘
↑
head'
Then we execute return head'. The variables of the second execution context end their lives (no more accents). The first execution context will assign the returned pointer to its own head variable:
l
↓
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ sibling: ─────────► │ sibling: NULL │ │ sibling: ─┐ │
│ │ │ │ ◄───────────────┘ │
└───────────────┘ └───────────────┘ └───────────────┘
↑
head
Now we get to the statement: l->sibling->sibling = l. That means an assignment is made to the sibling member of the middle node, and so we get:
l
↓
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ sibling: ─────────► │ sibling: ─┐ │ │ sibling: ─┐ │
│ │ ◄───────────────┘ │ ◄───────────────┘ │
└───────────────┘ └───────────────┘ └───────────────┘
↑
head
We execute l->sibling = NULL:
l
↓
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ sibling: NULL │ │ sibling: ─┐ │ │ sibling: ─┐ │
│ │ ◄───────────────┘ │ ◄───────────────┘ │
└───────────────┘ └───────────────┘ └───────────────┘
↑
head
And finally, we return head. The local variables end their lifetimes, and so only the returned pointer is relevant:
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ sibling: NULL │ │ sibling: ─┐ │ │ sibling: ─┐ │
│ │ ◄───────────────┘ │ ◄───────────────┘ │
└───────────────┘ └───────────────┘ └───────────────┘
↑
(returned)
You can see that the returned pointer is indeed referencing the reversed list.
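For comparison, the same reversal can be written iteratively with a single pass that flips each sibling pointer as it goes. Here is a minimal, self-contained sketch; the value field, the helper name reverseListIter, and the main harness are my additions for demonstration and are not part of the code above:

#include <cstdio>

// Minimal node for demonstration; a real binomial heap node would also
// carry key, degree, child and parent fields.
struct Node {
    int value;
    Node* sibling;
};

// Iterative equivalent of reverseList: walk the list once, making each
// node's sibling pointer refer to the node that preceded it.
Node* reverseListIter(Node* l) {
    Node* prev = NULL;
    while (l) {
        Node* next = l->sibling; // remember the rest of the list
        l->sibling = prev;       // flip the pointer backwards
        prev = l;                // advance both cursors
        l = next;
    }
    return prev;                 // the old tail is the new head
}

int main() {
    Node c = {3, NULL}, b = {2, &c}, a = {1, &b}; // builds 1 -> 2 -> 3
    for (Node* p = reverseListIter(&a); p; p = p->sibling)
        printf("%d ", p->value);                  // prints: 3 2 1
    return 0;
}

Both versions flip each sibling pointer exactly once; the recursive variant merely uses the call stack to remember where it came from.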

The code in the question is incorrect; this is the corrected code:
void RevertList(Node *h) {
    if (h->sibling != NULL) {
        RevertList(h->sibling);
        h->sibling->sibling = h;
    }
    else {
        root = h;      // the last node becomes the new head of the list
    }
    h->sibling = NULL; // overwritten by the caller, except for the new tail
}
RevertList is a helper function used when a node is deleted from a binomial heap.
When a node is deleted, its children and their siblings are detached from the binomial heap structure. The RevertList function reverses the order of the detached children so that they can be unioned back into the root list in the correct order.
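As a concrete check, here is a minimal, self-contained sketch that exercises the corrected RevertList on a detached child list. The degree field, the example values, and the main harness are illustrative additions, not part of the original answer:

#include <cstdio>

struct Node {
    int degree;
    Node* sibling;
};

Node* root = NULL; // receives the head of the reversed list

// Same function as above: reverses the sibling list and stores the
// new head in the global root.
void RevertList(Node* h) {
    if (h->sibling != NULL) {
        RevertList(h->sibling);
        h->sibling->sibling = h;
    } else {
        root = h;          // the last node becomes the new head
    }
    h->sibling = NULL;     // overwritten by the caller, except for the new tail
}

int main() {
    // A detached child list in decreasing order of degree (2 -> 1 -> 0),
    // as it appears after a root is removed from a binomial heap.
    Node d0 = {0, NULL}, d1 = {1, &d0}, d2 = {2, &d1};
    RevertList(&d2);
    for (Node* p = root; p; p = p->sibling)
        printf("degree %d\n", p->degree); // prints degrees 0, 1, 2
    return 0;
}

After the call, root walks the children in increasing order of degree, which is the order the union step of a binomial heap expects.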
(The original answer linked to a fuller implementation and to an illustration of this operation from the CLRS textbook.)

Related

How to reduce ClickHouse memory usage or release memory manually

I'm new to ClickHouse. I have about 3 billion rows of data in ClickHouse on a machine with 64 GB of RAM.
INSERT INTO a_table SELECT * FROM src_table WHERE create_time >= ? AND create_time <= ?
When executing this SQL, error code 241 (MEMORY_LIMIT_EXCEEDED) is sometimes returned.
SELECT
*,
formatReadableSize(value) AS b
FROM system.asynchronous_metrics
WHERE metric LIKE '%em%'
ORDER BY b DESC
┌─metric───────────────────────────────────┬─────────────────value─┬─b──────────┐
│ FilesystemLogsPathTotalBytes │ 105150078976 │ 97.93 GiB │
│ FilesystemMainPathUsedBytes │ 889244708864 │ 828.17 GiB │
│ MemoryVirtual │ 86271520768 │ 80.35 GiB │
│ MemoryDataAndStack │ 81446019072 │ 75.85 GiB │
│ jemalloc.epoch │ 7976 │ 7.79 KiB │
│ OSMemoryFreePlusCached │ 7745302528 │ 7.21 GiB │
│ OSMemoryTotal │ 66174210048 │ 61.63 GiB │
│ OSMemoryAvailable │ 6853541888 │ 6.38 GiB │
│ FilesystemLogsPathTotalINodes │ 6553600 │ 6.25 MiB │
│ FilesystemLogsPathAvailableINodes │ 6397780 │ 6.10 MiB │
│ jemalloc.arenas.all.dirty_purged │ 625232220 │ 596.27 MiB │
│ FilesystemLogsPathAvailableBytes │ 61849862144 │ 57.60 GiB │
│ jemalloc.mapped │ 58842886144 │ 54.80 GiB │
│ jemalloc.resident │ 58749423616 │ 54.71 GiB │
│ MemoryResident │ 58665074688 │ 54.64 GiB │
│ jemalloc.active │ 58473361408 │ 54.46 GiB │
│ jemalloc.allocated │ 57016602472 │ 53.10 GiB │
│ jemalloc.arenas.all.muzzy_purged │ 483548433 │ 461.15 MiB │
│ FilesystemLogsPathUsedBytes │ 43300216832 │ 40.33 GiB │
│ OSMemoryCached │ 4472168448 │ 4.17 GiB │
│ MemoryCode │ 366669824 │ 349.68 MiB │
│ jemalloc.arenas.all.pdirty │ 3830 │ 3.74 KiB │
│ OSMemoryFreeWithoutCached │ 3273134080 │ 3.05 GiB │
│ jemalloc.metadata │ 262799792 │ 250.63 MiB │
│ MemoryShared │ 253267968 │ 241.54 MiB │
│ FilesystemMainPathUsedINodes │ 215867 │ 210.81 KiB │
│ FilesystemMainPathTotalBytes │ 3170529116160 │ 2.88 TiB │
│ FilesystemMainPathAvailableBytes │ 2281284407296 │ 2.07 TiB │
│ FilesystemMainPathTotalINodes │ 196608000 │ 187.50 MiB │
│ FilesystemMainPathAvailableINodes │ 196392133 │ 187.29 MiB │
│ jemalloc.retained │ 19604230144 │ 18.26 GiB │
│ FilesystemLogsPathUsedINodes │ 155820 │ 152.17 KiB │
│ jemalloc.arenas.all.pactive │ 14275723 │ 13.61 MiB │
│ jemalloc.arenas.all.pmuzzy │ 1394 │ 1.36 KiB │
└──────────────────────────────────────────┴───────────────────────┴────────────┘
Can I release memory manually?

How to create multiple containers in the same pod when they have separate deployment.yaml files?

tl;dr: in docker-compose, inter-container communication is possible via localhost. I want to do the same in k8s; however, I have separate deployment.yaml files for each component. How do I link them?
I have a Kubernetes Helm package which contains sub Helm packages. The folder structure is as follows:
A
├── Chart.yaml
├── values.yaml
├── charts
│ ├── component1
│ │ ├── Chart.yaml
│ │ ├── templates
│ │ │ ├── configmap.yaml
│ │ │ ├── deployment.yaml
│ │ │ ├── hpa.yaml
│ │ │ ├── ingress.yaml
│ │ │ ├── service.yaml
│ │ │ ├── serviceaccount.yaml
│ │ └── values.yaml
│ ├── component2
│ │ ├── Chart.yaml
│ │ ├── templates
│ │ │ ├── certs.yaml
│ │ │ ├── configmap.yaml
│ │ │ ├── pdb.yaml
│ │ │ ├── role.yaml
│ │ │ ├── statefulset.yaml
│ │ │ ├── pvc.yaml
│ │ │ └── svc.yaml
│ │ ├── values-production.yaml
│ │ └── values.yaml
In docker-compose, I was able to communicate between component1 and component2 via ports using localhost.
However, in this architecture, I have separate deployment.yaml files for those components. I know that if I keep them as containers in a single deployment.yaml file, I can communicate via localhost.
Question: How do I put these containers in the same pod, given that they are defined in separate deployment.yaml files?
That's not possible. Pods are the smallest deployable unit in Kubernetes and consist of one or more containers. All containers inside a pod share the same network namespace (among other namespaces). The containers can only be reached via FQDN or IP; for a container outside the pod, localhost means something completely different. Similar to running docker-compose on different hosts, they cannot connect to each other using localhost.
You can use the service's name to get similar behaviour. Instead of calling http://localhost:8080 you can simply use http://component1:8080 to reach component1 from component2, supposing the service in component1/templates/service.yaml is named component1 and both are in the same namespace. Generally there is a DNS record for every service with the schema <service>.<namespace>, e.g. component1.default for component1 running in the default namespace. If component2 were in a different namespace you would use http://component1.default:8080.

How to set up pm2-logrotate for Docker with Node.js running pm2?

I have the Docker image keymetrics/pm2:8-jessie and my Node.js application runs fine with pm2. I tried to add pm2-logrotate to rotate the logs by size and date. I added the following to my Dockerfile. The pm2-logrotate module starts, but the target PID is null. Can anyone help, please?
FROM keymetrics/pm2:8-jessie
RUN npm install
RUN pm2 install pm2-logrotate
RUN pm2 set pm2-logrotate:retain 90
RUN pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
RUN pm2 set pm2-logrotate:max_size 10M
RUN pm2 set pm2-logrotate:rotateInterval 0 0 * * *
RUN pm2 set pm2-logrotate:rotateModule true
RUN pm2 set pm2-logrotate:workerInterval 10
ENV NODE_ENV=$buildenv
ENV NPM_CONFIG_LOGLEVEL warn
CMD ["sh", "-c", "pm2-runtime start pm2.${NODE_ENV}.config.js"]
pm2 ls
┌──────────────┬────┬─────────┬─────────┬─────┬────────┬─────────┬────────┬─────┬────────────┬──────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├──────────────┼────┼─────────┼─────────┼─────┼────────┼─────────┼────────┼─────┼────────────┼──────┼──────────┤
│ app_server │ 1 │ 1.0.0 │ cluster │ 150 │ online │ 1 │ 2h │ 0% │ 104.4 MB │ root │ disabled │
└──────────────┴────┴─────────┴─────────┴─────┴────────┴─────────┴────────┴─────┴────────────┴──────┴──────────┘
Module
┌───────────────┬────┬─────────┬─────┬────────┬─────────┬─────┬───────────┬──────┐
│ Module │ id │ version │ pid │ status │ restart │ cpu │ memory │ user │
├───────────────┼────┼─────────┼─────┼────────┼─────────┼─────┼───────────┼──────┤
│ pm2-logrotate │ 2 │ 2.7.0 │ 205 │ online │ 0 │ 0% │ 44.5 MB │ root │
└───────────────┴────┴─────────┴─────┴────────┴─────────┴─────┴───────────┴──────┘
One reason is that pm2-logrotate is not the primary process of the Docker container but a process managed by pm2. You can verify this behaviour by stopping the main process defined in pm2.${NODE_ENV}.config.js: the container will die regardless of whether pm2-logrotate is still running.
Also, I do not think the PID should be null; it should look something like this:
pm2 ls
┌─────┬──────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 1 │ www │ default │ 0.0.0 │ fork │ 26 │ 13s │ 0 │ online │ 0% │ 40.3mb │ root │ disabled │
└─────┴──────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
Module
┌────┬───────────────────────────────────────┬────────────────────┬───────┬──────────┬──────┬──────────┬──────────┬──────────┐
│ id │ module │ version │ pid │ status │ ↺ │ cpu │ mem │ user │
├────┼───────────────────────────────────────┼────────────────────┼───────┼──────────┼──────┼──────────┼──────────┼──────────┤
│ 0 │ pm2-logrotate │ 2.7.0 │ 17 │ online │ 0 │ 0.5% │ 43.1mb │ root │
└────┴───────────────────────────────────────┴────────────────────┴───────┴──────────┴──────┴──────────┴──────────┴──────────┘
I would also suggest using an Alpine base image, as the image above is very heavy: the image below is about 150 MB, while the one above is around 1 GB.
FROM node:alpine
RUN npm install pm2 -g
# copy the app before installing its dependencies, so package.json is present
WORKDIR /app
COPY . /app
RUN npm install
RUN pm2 install pm2-logrotate
RUN pm2 set pm2-logrotate:retain 90
RUN pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
RUN pm2 set pm2-logrotate:max_size 10M
# quote the cron expression so the shell does not expand the asterisks
RUN pm2 set pm2-logrotate:rotateInterval '0 0 * * *'
RUN pm2 set pm2-logrotate:rotateModule true
RUN pm2 set pm2-logrotate:workerInterval 10
ENV NODE_ENV=$buildenv
ENV NPM_CONFIG_LOGLEVEL warn
CMD ["sh", "-c", "pm2-runtime start confi"]

Unable to connect second org peer to channel in HLF

I am following the link below to set up my first network on Hyperledger Fabric: http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html
I am able to complete pretty much all the steps mentioned in this setup, and all my Docker containers are working fine. The issue is that when I try to join a peer of the second org to the channel, using the command below:
"Join peer0.dfarmretail.com to the channel."
docker exec -e "CORE_PEER_LOCALMSPID=DfarmretailMSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin#dfarmretail.com/msp" peer0.dfarmretail.com peer channel join -o orderer.dfarmadmin.com:7050 -b dfarmchannel.block
However, I am getting the error below:
error: error getting endorser client for channel: endorser client failed to connect to peer0.dfarmretail.com:8051: failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp 172.20.0.6:8051: connect: connection refused"
Please see the files below.
My docker-compose.yaml:
version: '2'

networks:
  dfarm:

services:
  ca.dfarmadmin.com:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca.dfarmadmin.com
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.dfarmadmin.com-cert.pem
      - FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/ad62c9f5133ad87c5f94d6b3175eb059395b5f68caf43e439e6bb7d42d8296e4_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/dfarmadmin.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca.dfarmadmin.com
    networks:
      - dfarm

  orderer.dfarmadmin.com:
    container_name: orderer.dfarmadmin.com
    image: hyperledger/fabric-orderer
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp/orderer/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
    command: orderer
    ports:
      - 7050:7050
    volumes:
      - ./config/:/etc/hyperledger/configtx
      - ./crypto-config/ordererOrganizations/dfarmadmin.com/orderers/orderer.dfarmadmin.com/:/etc/hyperledger/msp/orderer
      - ./crypto-config/peerOrganizations/dfarmadmin.com/peers/peer0.dfarmadmin.com/:/etc/hyperledger/msp/peerDfarmadmin
    networks:
      - dfarm

  peer0.dfarmadmin.com:
    container_name: peer0.dfarmadmin.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_PEER_ID=peer0.dfarmadmin.com
      - CORE_LOGGING_PEER=debug
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_LOCALMSPID=DfarmadminMSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
      - CORE_PEER_ADDRESS=peer0.dfarmadmin.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.dfarmadmin.com:7051
      # # the following setting starts chaincode containers on the same
      # # bridge network as the peers
      # # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_dfarm
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
      # The CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME and CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
      # provide the credentials for ledger to connect to CouchDB. The username and password must
      # match the username and password set for the associated CouchDB.
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start
    # command: peer node start --peer-chaincodedev=true
    ports:
      - 7051:7051
      - 7053:7053
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/dfarmadmin.com/peers/peer0.dfarmadmin.com/msp:/etc/hyperledger/msp/peer
      - ./crypto-config/peerOrganizations/dfarmadmin.com/users:/etc/hyperledger/msp/users
      - ./config:/etc/hyperledger/configtx
    depends_on:
      - orderer.dfarmadmin.com
      - couchdb
    networks:
      - dfarm

  peer0.dfarmretail.com:
    container_name: peer0.dfarmretail.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_PEER_ID=peer0.dfarmretail.com
      - CORE_LOGGING_PEER=debug
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_LOCALMSPID=DfarmretailMSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
      - CORE_PEER_ADDRESS=peer0.dfarmretail.com:8051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.dfarmretail.com:8051
      # # the following setting starts chaincode containers on the same
      # # bridge network as the peers
      # # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_dfarm
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
      # The CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME and CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
      # provide the credentials for ledger to connect to CouchDB. The username and password must
      # match the username and password set for the associated CouchDB.
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start
    # command: peer node start --peer-chaincodedev=true
    ports:
      - 8051:8051
      - 8053:8053
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/dfarmretail.com/peers/peer0.dfarmretail.com/msp:/etc/hyperledger/msp/peer
      - ./crypto-config/peerOrganizations/dfarmretail.com/users:/etc/hyperledger/msp/users
      - ./config:/etc/hyperledger/configtx
    depends_on:
      - orderer.dfarmadmin.com
      - couchdb
    networks:
      - dfarm

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
    # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    ports:
      - 5984:5984
    networks:
      - dfarm

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.dfarmadmin.com:7051
      - CORE_PEER_LOCALMSPID=DfarmadminMSP
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dfarmadmin.com/users/Admin@dfarmadmin.com/msp
      - CORE_CHAINCODE_KEEPALIVE=10
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
    networks:
      - dfarm
    depends_on:
      - orderer.dfarmadmin.com
      - peer0.dfarmadmin.com
      - peer0.dfarmretail.com
      - couchdb
My start.sh:
#!/bin/bash
#
# Exit on first error, print all commands.
set -ev
# don't rewrite paths for Windows Git Bash users
export MSYS_NO_PATHCONV=1
FABRIC_START_TIMEOUT=90
docker-compose -f docker-compose.yml down
docker-compose -f docker-compose.yml up -d ca.dfarmadmin.com orderer.dfarmadmin.com peer0.dfarmadmin.com peer0.dfarmretail.com couchdb
# wait for Hyperledger Fabric to start
# in case of errors when running later commands, issue export FABRIC_START_TIMEOUT=<larger number>
export FABRIC_START_TIMEOUT=10
#echo ${FABRIC_START_TIMEOUT}
sleep ${FABRIC_START_TIMEOUT}
# Create the channel
docker exec -e "CORE_PEER_LOCALMSPID=DfarmadminMSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin#dfarmadmin.com/msp" peer0.dfarmadmin.com peer channel create -o orderer.dfarmadmin.com:7050 -c dfarmchannel -f /etc/hyperledger/configtx/channel.tx
# Join peer0.dfarmadmin.com to the channel.
docker exec -e "CORE_PEER_LOCALMSPID=DfarmadminMSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin#dfarmadmin.com/msp" peer0.dfarmadmin.com peer channel join -b dfarmchannel.block
# Join peer0.dfarmretail.com to the channel.
docker exec -e "CORE_PEER_LOCALMSPID=DfarmretailMSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin#dfarmretail.com/msp" peer0.dfarmretail.com peer channel join -o orderer.dfarmadmin.com:7050 -b dfarmchannel.block
This is my project folder structure:
├── config
│ ├── channel.tx
│ ├── DfarmadminMSPanchors.tx
│ ├── DfarmretailMSPanchors.tx
│ └── genesis.block
├── configtx.yaml
├── crypto-config
│ ├── 1
│ ├── ordererOrganizations
│ │ └── dfarmadmin.com
│ │ ├── ca
│ │ │ ├── 5f0077f4811e16e3bac8b64ae22e35bd52f3205538587e0a52eaa49e86b57c4c_sk
│ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ ├── orderers
│ │ │ └── orderer.dfarmadmin.com
│ │ │ ├── msp
│ │ │ │ ├── admincerts
│ │ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ │ ├── cacerts
│ │ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ │ ├── keystore
│ │ │ │ │ └── ecda7305295e86d0890aea73874c80c21a9b29dc04435ef521f1025194a366c8_sk
│ │ │ │ ├── signcerts
│ │ │ │ │ └── orderer.dfarmadmin.com-cert.pem
│ │ │ │ └── tlscacerts
│ │ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ │ └── tls
│ │ │ ├── ca.crt
│ │ │ ├── server.crt
│ │ │ └── server.key
│ │ ├── tlsca
│ │ │ ├── 199db47c8e231c6cff329e1fdfa8b522ef7b74847808f61045057b56498f49fd_sk
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ └── users
│ │ └── Admin@dfarmadmin.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ ├── keystore
│ │ │ │ └── 0c5004c87035e89b735940b5b446d59d138c1af8f42b73980c7d7b03373ee333_sk
│ │ │ ├── signcerts
│ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── client.crt
│ │ └── client.key
│ └── peerOrganizations
│ ├── dfarmadmin.com
│ │ ├── ca
│ │ │ ├── ad62c9f5133ad87c5f94d6b3175eb059395b5f68caf43e439e6bb7d42d8296e4_sk
│ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ ├── peers
│ │ │ ├── peer0.dfarmadmin.com
│ │ │ │ ├── msp
│ │ │ │ │ ├── admincerts
│ │ │ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ │ │ ├── cacerts
│ │ │ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ │ │ ├── keystore
│ │ │ │ │ │ └── 66f1271392ea3ce4d3548e91ee5620591e79e538e36a69b38786b3f11f3c53e2_sk
│ │ │ │ │ ├── signcerts
│ │ │ │ │ │ └── peer0.dfarmadmin.com-cert.pem
│ │ │ │ │ └── tlscacerts
│ │ │ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ │ │ └── tls
│ │ │ │ ├── ca.crt
│ │ │ │ ├── server.crt
│ │ │ │ └── server.key
│ │ │ └── peer0.dfarmretail.com
│ │ │ └── msp
│ │ │ └── keystore
│ │ ├── tlsca
│ │ │ ├── f6f49b0ff9c7f850e5f655dfbb88ce7b8c07f3f872d151346ac65c6f5f2ef80d_sk
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ └── users
│ │ ├── Admin@dfarmadmin.com
│ │ │ ├── msp
│ │ │ │ ├── admincerts
│ │ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ │ ├── cacerts
│ │ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ │ ├── keystore
│ │ │ │ │ └── 9c65737a78159bf977b9e38299c9c8e02278f76c3d4650caf32a4da845947547_sk
│ │ │ │ ├── signcerts
│ │ │ │ │ └── Admin@dfarmadmin.com-cert.pem
│ │ │ │ └── tlscacerts
│ │ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ │ └── tls
│ │ │ ├── ca.crt
│ │ │ ├── client.crt
│ │ │ └── client.key
│ │ └── User1@dfarmadmin.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── User1@dfarmadmin.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmadmin.com-cert.pem
│ │ │ ├── keystore
│ │ │ │ └── 458f1f699493828d88507fabb9ad2dab4fc2cc8acdaf4aa65c1fda12710227dd_sk
│ │ │ ├── signcerts
│ │ │ │ └── User1@dfarmadmin.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmadmin.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── client.crt
│ │ └── client.key
│ └── dfarmretail.com
│ ├── ca
│ │ ├── 8f839598652d94f6ab6cb3d0f15390df5fe8dd7b6bb88c5c3b75205b975bc8d2_sk
│ │ └── ca.dfarmretail.com-cert.pem
│ ├── msp
│ │ ├── admincerts
│ │ │ └── Admin@dfarmretail.com-cert.pem
│ │ ├── cacerts
│ │ │ └── ca.dfarmretail.com-cert.pem
│ │ └── tlscacerts
│ │ └── tlsca.dfarmretail.com-cert.pem
│ ├── peers
│ │ └── peer0.dfarmretail.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin@dfarmretail.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmretail.com-cert.pem
│ │ │ ├── keystore
│ │ │ │ └── 2115fb2c52372041918517c2dcef91cb7cc66ca4a987a1606a98e9b75d78ab91_sk
│ │ │ ├── signcerts
│ │ │ │ └── peer0.dfarmretail.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmretail.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── server.crt
│ │ └── server.key
│ ├── tlsca
│ │ ├── 8b26e70a303598e0012852426ac93be726210c5911baf4695785cf595bad3041_sk
│ │ └── tlsca.dfarmretail.com-cert.pem
│ └── users
│ ├── Admin@dfarmretail.com
│ │ ├── msp
│ │ │ ├── admincerts
│ │ │ │ └── Admin@dfarmretail.com-cert.pem
│ │ │ ├── cacerts
│ │ │ │ └── ca.dfarmretail.com-cert.pem
│ │ │ ├── keystore
│ │ │ │ └── 7ac01c0d8b0b4f3245d1e68fe34d34a2e1727059c459c1418b68b66870328eb2_sk
│ │ │ ├── signcerts
│ │ │ │ └── Admin@dfarmretail.com-cert.pem
│ │ │ └── tlscacerts
│ │ │ └── tlsca.dfarmretail.com-cert.pem
│ │ └── tls
│ │ ├── ca.crt
│ │ ├── client.crt
│ │ └── client.key
│ └── User1@dfarmretail.com
│ ├── msp
│ │ ├── admincerts
│ │ │ └── User1@dfarmretail.com-cert.pem
│ │ ├── cacerts
│ │ │ └── ca.dfarmretail.com-cert.pem
│ │ ├── keystore
│ │ │ └── e40665832cc9d4fce41f72b04505655f9eb46e3b704547987f03863de37331b5_sk
│ │ ├── signcerts
│ │ │ └── User1@dfarmretail.com-cert.pem
│ │ └── tlscacerts
│ │ └── tlsca.dfarmretail.com-cert.pem
│ └── tls
│ ├── ca.crt
│ ├── client.crt
│ └── client.key
├── crypto-config.yaml
├── docker-compose.yml
├── generate.sh
├── init.sh
├── README.md
├── start.sh
├── stop.sh
└── teardown.sh
(The docker logs for the dfarmretail peer container and for orderer.dfarmadmin.com were attached here.)
I have tried a lot to rectify the issue but have been unable to, so could you please help with this? Thanks in advance.
Is your peer for peer0.dfarmretail.com running OK? (I would check its log.) In your docker-compose file you are configuring both of your peers to use the same CouchDB container, but you need to configure a separate CouchDB for each peer. The retail peer may be failing because of some problem with the CouchDB container already being allocated to the admin peer. The second CouchDB container will have to use a different port, and the retail peer will have to be changed to connect to that new port.
I notice that you are exposing port 7053 on your peer. Port 7053 was used in earlier versions of Fabric for the event hub, I think - what version of Fabric are you using?
You don't have to use CouchDB for your peers, but if you configure your peers to use CouchDB (CORE_LEDGER_STATE_STATEDATABASE=CouchDB) then you need a separate CouchDB container for each.
Following updates to the question and comment:
The original error shows "connection refused", but from the log it looks like the peer is still running, so it looks like some networking error. There is also a line in the dfarmretail peer log showing that the chaincode listen address is using port 7052, whereas I think it should be 8052.
I suggest you add these 2 config lines to the dfarmadmin peer in the docker compose file:
- CORE_PEER_LISTENADDRESS=0.0.0.0:7051
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
and for dfarmretail peer:
- CORE_PEER_LISTENADDRESS=0.0.0.0:8051
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:8052
This should clear up any port ambiguity and make the peers listen on all interfaces.
You could also try the free tool from www.chaincoder.org, which will generate all the config files for you and lets you easily code and deploy chaincodes on peers.

Electron application SQLITE package has not been found installed

We are struggling with building an app for Windows 32-bit and 64-bit.
It is an Angular 2 application which uses sqlite3 as its database.
Everything works perfectly in development, but after packaging the app and running it on Windows it throws this error:
SQLite package has not been found installed. Try to install it: npm install sqlite3 --save
Here is package.json (the part of it that is important for this issue):
"scripts": {
"build:aot:prod": "npm run clean:dist && npm run clean:aot && cross-env BUILD_AOT=1 npm run webpack -- --config config/webpack.prod.js --progress --profile --bail",
"build:aot": "npm run build:aot:prod",
"build:dev": "npm run clean:dist && npm run webpack -- --config config/webpack.dev.js --progress --profile",
"build:docker": "npm run build:prod && docker build -t angular2-webpack-start:latest .",
"build:prod": "npm run clean:dist && npm run webpack -- --config config/webpack.prod.js --progress --profile --bail",
"build": "npm run build:dev",
"ci:aot": "npm run lint && npm run test && npm run build:aot && npm run e2e",
"ci:jit": "npm run lint && npm run test && npm run build:prod && npm run e2e",
"ci:nobuild": "npm run lint && npm test && npm run e2e",
"ci:testall": "npm run lint && npm run test && npm run build:prod && npm run e2e && npm run build:aot && npm run e2e",
"ci:travis": "npm run lint && npm run test && npm run build:aot && npm run e2e:travis",
"ci": "npm run ci:testall",
"clean:dll": "npm run rimraf -- dll",
"clean:aot": "npm run rimraf -- compiled",
"clean:dist": "npm run rimraf -- dist",
"clean:install": "npm set progress=false && npm install",
"clean": "npm cache clean --force && npm run rimraf -- node_modules doc coverage dist compiled dll",
"docker": "docker",
"docs": "npm run typedoc -- --options typedoc.json --exclude '**/*.spec.ts' ./src/",
"e2e:live": "npm-run-all -p -r server:prod:ci protractor:live",
"e2e:travis": "npm-run-all -p -r server:prod:ci protractor:delay",
"e2e": "npm-run-all -p -r server:prod:ci protractor",
"github-deploy:dev": "npm run webpack -- --config config/webpack.github-deploy.js --progress --profile --env.githubDev",
"github-deploy:prod": "npm run webpack -- --config config/webpack.github-deploy.js --progress --profile --env.githubProd",
"github-deploy": "npm run github-deploy:dev",
"lint": "npm run tslint \"src/**/*.ts\"",
"node": "node",
"postinstall": "install-app-deps && electron-rebuild",
"postversion": "git push && git push --tags",
"preclean:install": "npm run clean",
"preversion": "npm test",
"protractor": "protractor",
"protractor:delay": "sleep 3 && npm run protractor",
"protractor:live": "protractor --elementExplorer",
"rimraf": "rimraf",
"server:dev:hmr": "npm run server:dev -- --inline --hot",
"server:dev": "npm run webpack-dev-server -- --config config/webpack.dev.js --progress --profile --watch --content-base src/",
"server:prod": "http-server dist -c-1 --cors",
"server:prod:ci": "http-server dist -p 3000 -c-1 --cors",
"server": "npm run server:dev",
"start:hmr": "npm run server:dev:hmr",
"start": "npm run server:dev",
"test": "npm run lint && karma start",
"tslint": "tslint",
"typedoc": "typedoc",
"version": "npm run build",
"watch:dev:hmr": "npm run watch:dev -- --hot",
"watch:dev": "npm run build:dev -- --watch",
"watch:prod": "npm run build:prod -- --watch",
"watch:test": "npm run test -- --auto-watch --no-single-run",
"watch": "npm run watch:dev",
"electron:pre": "copyfiles main.js dist && copyfiles package.json dist && copyfiles ./icons/* ./dist && npm --prefix ./dist install ./dist --production",
"electron:dev": "cross-env NODE_ENV=development electron .",
"electron:prod": "npm run build:aot:prod && npm run electron:pre && electron ./dist",
"electron:linux": "npm run build:aot:prod && npm run electron:pre && node package.js --asar --platform=linux --arch=x64 && cd dist && electron-builder install-app-deps --platform=linux --arch=x64",
"electron:windows": "npm run build:aot:prod && npm run electron:pre && electron-builder install-app-deps --platform=win32 && node package.js --asar --platform=win32",
"electron:mac": "npm run build:aot:prod && npm run electron:pre && node package.js --asar --platform=darwin --arch=x64 && cd dist && electron-builder install-app-deps --platform=darwin --arch=x64",
"webdriver-manager": "webdriver-manager",
"webdriver:start": "npm run webdriver-manager start",
"webdriver:update": "webdriver-manager update",
"webpack-dev-server": "node --max_old_space_size=4096 node_modules/webpack-dev-server/bin/webpack-dev-server.js",
"webpack": "node --max_old_space_size=4096 node_modules/webpack/bin/webpack.js"
},
"dependencies": {
"node-pre-gyp": "^0.6.38",
"sqlite3": "^3.1.9",
"typeorm": "0.1.0-alpha.49",
"uikit": "^3.0.0-beta.30"
},
"devDependencies": {
"#angular/animations": "~4.3.1",
"#angular/common": "~4.3.1",
"#angular/compiler": "~4.3.1",
"#angular/compiler-cli": "~4.3.1",
"#angular/core": "~4.3.1",
"#angular/forms": "~4.3.1",
"#angular/http": "~4.3.1",
"#angular/platform-browser": "~4.3.1",
"#angular/platform-browser-dynamic": "~4.3.1",
"#angular/platform-server": "~4.3.1",
"#angular/router": "~4.3.1",
"#angularclass/hmr": "~1.2.2",
"#angularclass/hmr-loader": "^3.0.4",
"#ngrx/effects": "^4.0.5",
"#ngrx/store": "^4.0.3",
"#types/hammerjs": "^2.0.34",
"#types/jasmine": "2.5.45",
"#types/node": "^7.0.39",
"#types/source-map": "^0.5.0",
"#types/uglify-js": "^2.6.28",
"#types/webpack": "^2.2.16",
"add-asset-html-webpack-plugin": "^1.0.2",
"angular2-template-loader": "^0.6.2",
"assets-webpack-plugin": "^3.5.1",
"awesome-typescript-loader": "~3.1.2",
"codelyzer": "~2.1.1",
"copy-webpack-plugin": "^4.0.1",
"copyfiles": "^1.2.0",
"core-js": "^2.4.1",
"cross-env": "^5.0.0",
"css-loader": "^0.28.0",
"electron": "1.7.5",
"electron-builder": "^19.27.7",
"electron-packager": "8.7.2",
"electron-rebuild": "^1.6.0",
"electron-reload": "^1.1.0",
"exports-loader": "^0.6.4",
"expose-loader": "^0.7.3",
"extract-text-webpack-plugin": "~2.1.0",
"file-loader": "^0.11.1",
"find-root": "^1.0.0",
"gh-pages": "^1.0.0",
"html-webpack-plugin": "^2.28.0",
"http-server": "^0.9.0",
"ie-shim": "^0.1.0",
"imports-loader": "^0.7.1",
"inline-manifest-webpack-plugin": "^3.0.1",
"istanbul-instrumenter-loader": "2.0.0",
"jasmine-core": "^2.5.2",
"jquery": "^3.2.1",
"karma": "^1.6.0",
"karma-chrome-launcher": "^2.0.0",
"karma-coverage": "^1.1.1",
"karma-jasmine": "^1.1.0",
"karma-mocha-reporter": "^2.2.3",
"karma-remap-coverage": "^0.1.4",
"karma-sourcemap-loader": "^0.3.7",
"karma-webpack": "^2.0.4",
"less": "^2.7.2",
"less-loader": "^4.0.5",
"ng-router-loader": "^2.1.0",
"ngc-webpack": "^3.2.0",
"node-sass": "^4.5.2",
"npm-run-all": "^4.0.2",
"optimize-js-plugin": "0.0.4",
"parse5": "^3.0.2",
"preload-webpack-plugin": "^1.2.2",
"protractor": "^5.1.1",
"raw-loader": "0.5.1",
"reflect-metadata": "^0.1.10",
"rimraf": "~2.6.1",
"rxjs": "~5.0.2",
"sass-loader": "^6.0.3",
"script-ext-html-webpack-plugin": "^1.8.5",
"source-map-loader": "^0.2.1",
"string-replace-loader": "~1.2.0",
"style-loader": "^0.18.1",
"to-string-loader": "^1.1.5",
"ts-node": "^3.3.0",
"tslib": "^1.7.1",
"tslint": "~4.5.1",
"tslint-loader": "^3.5.2",
"typedoc": "^0.7.1",
"typescript": "2.5.0",
"uglify-js": "git://github.com/mishoo/UglifyJS2#harmony-v2.8.22",
"uglifyjs-webpack-plugin": "0.4.3",
"url-loader": "^0.5.8",
"webpack": "~2.6.1",
"webpack-dev-middleware": "^1.10.1",
"webpack-dev-server": "~2.4.2",
"webpack-dll-bundles-plugin": "^1.0.0-beta.5",
"webpack-merge": "~4.1.0",
"zone.js": "0.8.14"
},
After running npm run electron:windows everything is good; here is the output:
/home/haris/.nvm/versions/node/v6.9.4/bin/node /home/haris/.nvm/versions/node/v6.9.4/lib/node_modules/npm/bin/npm-cli.js run electron:windows --scripts-prepend-node-path=auto
> angular-electron-starter@1.0.0 electron:windows /home/haris/development/walter/bitbucket-
> npm run build:aot:prod && npm run electron:pre && electron-builder install-app-deps --platform=win32 && node package.js --asar --platform=win32
> angular-electron-starter@1.0.0 build:aot:prod /home/haris/development/walter/bitbucket-
> npm run clean:dist && npm run clean:aot && cross-env BUILD_AOT=1 npm run webpack -- --config config/webpack.prod.js --progress --profile --bail
> angular-electron-starter@1.0.0 clean:dist /home/haris/development/walter/bitbucket-
> npm run rimraf -- dist
> angular-electron-starter@1.0.0 rimraf /home/haris/development/walter/bitbucket-
> rimraf "dist"
> angular-electron-starter@1.0.0 clean:aot /home/haris/development/walter/bitbucket-
> npm run rimraf -- compiled
> angular-electron-starter@1.0.0 rimraf /home/haris/development/walter/bitbucket-
> rimraf "compiled"
> angular-electron-starter@1.0.0 webpack /home/haris/development/walter/bitbucket-
> node --max_old_space_size=4096 node_modules/webpack/bin/webpack.js "--config" "config/webpack.prod.js" "--progress" "--profile" "--bail"
Starting compilation using the angular compiler.
Angular compilation done, starting webpack bundling.
0% compiling
10% building modules 0/1 modules 1 active ...ntent-manager/src/main.browser.aot.ts
10% building modules 0/2 modules 2 active ...tent-manager/src/polyfills.browser.ts
[at-loader] Using typescript@2.5.0 from typescript and "tsconfig.json" from /home/haris/development/walter/bitbucket-/tsconfig.webpack.json.
10% building modules 1/2 modules 1 active ...tent-manager/src/polyfills.browser.ts
# I removed the building-modules output because of the character limit on Stack Overflow.
25067ms additional asset processing
92% chunk asset optimization
3538ms chunk asset optimization
94% asset optimization
[at-loader] Checking started in a separate process...
[at-loader] Ok, 2.38 sec.
2788ms asset optimization
95% emitting
18ms emitting
Hash: a3f29d769fb284afcae1
Version: webpack 2.6.1
Time: 62001ms
[emitted]
WARNING in ./~/typeorm/platform/PlatformTools.js
33:19-32 Critical dependency: the request of a dependency is an expression
WARNING in ./~/typeorm/platform/PlatformTools.js
37:23-85 Critical dependency: the request of a dependency is an expression
Child html-webpack-plugin for "index.html":
[3IRH] (webpack)/buildin/module.js 517 bytes {0} [built]
[] -> factory:36ms building:174ms = 210ms
[7GO9] ./~/html-webpack-plugin/lib/loader.js!./src/index.html 2.2 kB {0} [built]
factory:6ms building:11ms = 17ms
[DuR2] (webpack)/buildin/global.js 509 bytes {0} [built]
[] -> factory:36ms building:174ms = 210ms
[M4fF] ./~/lodash/lodash.js 540 kB {0} [built]
[] -> factory:83ms building:3556ms = 3639ms
Child extract-text-webpack-plugin:
[9rjH] ./~/css-loader!./src/styles/headings.css 166 bytes {0} [built]
factory:2ms building:17ms = 19ms
[FZ+f] ./~/css-loader/lib/css-base.js 2.26 kB {0} [built]
[] -> factory:0ms building:2ms = 2ms
Child extract-text-webpack-plugin:
[FZ+f] ./~/css-loader/lib/css-base.js 2.26 kB {0} [built]
[] -> factory:0ms building:1ms = 1ms
[pZge] ./~/css-loader!./~/less-loader/dist/cjs.js!./src/styles/styles.less 256 kB {0} [built]
factory:3ms building:5063ms = 5066ms
> angular-electron-starter@1.0.0 electron:pre /home/haris/development/walter/bitbucket-
> copyfiles main.js dist && copyfiles package.json dist && copyfiles ./icons/* ./dist && npm --prefix ./dist install ./dist --production
> sqlite3@3.1.13 install /home/haris/development/walter/bitbucket-/dist/node_modules/sqlite3
> node-pre-gyp install --fallback-to-build
[sqlite3] Success: "/home/haris/development/walter/bitbucket-/dist/node_modules/sqlite3/lib/binding/node-v48-linux-x64/node_sqlite3.node" is installed via remote
> angular-electron-starter@1.0.0 postinstall /home/haris/development/walter/bitbucket-/dist
> install-app-deps && electron-rebuild
Warning: Please use as subcommand: electron-builder install-app-deps
electron-builder 19.36.0
Rebuilding native production dependencies for linux:x64
✔ Rebuild Complete
angular-electron-starter@1.0.0 /home/haris/development/walter/bitbucket-/dist
├─┬ node-pre-gyp@0.6.38
│ ├─┬ hawk@3.1.3
│ │ ├── boom@2.10.1
│ │ ├── cryptiles@2.0.5
│ │ ├── hoek@2.16.3
│ │ └── sntp@1.0.9
│ ├─┬ mkdirp@0.5.1
│ │ └── minimist@0.0.8
│ ├─┬ nopt@4.0.1
│ │ ├── abbrev@1.1.1
│ │ └─┬ osenv@0.1.4
│ │ ├── os-homedir@1.0.2
│ │ └── os-tmpdir@1.0.2
│ ├─┬ npmlog@4.1.2
│ │ ├─┬ are-we-there-yet@1.1.4
│ │ │ └── delegates@1.0.0
│ │ ├── console-control-strings@1.1.0
│ │ ├─┬ gauge@2.7.4
│ │ │ ├── aproba@1.2.0
│ │ │ ├── has-unicode@2.0.1
│ │ │ ├── object-assign@4.1.1
│ │ │ ├── signal-exit@3.0.2
│ │ │ ├─┬ string-width@1.0.2
│ │ │ │ ├── code-point-at@1.1.0
│ │ │ │ └─┬ is-fullwidth-code-point@1.0.0
│ │ │ │ └── number-is-nan@1.0.1
│ │ │ ├─┬ strip-ansi@3.0.1
│ │ │ │ └── ansi-regex@2.1.1
│ │ │ └── wide-align@1.1.2
│ │ └── set-blocking@2.0.0
│ ├─┬ rc@1.2.1
│ │ ├── deep-extend@0.4.2
│ │ ├── ini@1.3.4
│ │ ├── minimist@1.2.0
│ │ └── strip-json-comments@2.0.1
│ ├─┬ request@2.81.0
│ │ ├── aws-sign2@0.6.0
│ │ ├── aws4@1.6.0
│ │ ├── caseless@0.12.0
│ │ ├─┬ combined-stream@1.0.5
│ │ │ └── delayed-stream@1.0.0
│ │ ├── extend@3.0.1
│ │ ├── forever-agent@0.6.1
│ │ ├─┬ form-data@2.1.4
│ │ │ └── asynckit@0.4.0
│ │ ├─┬ har-validator@4.2.1
│ │ │ ├─┬ ajv@4.11.8
│ │ │ │ ├── co@4.6.0
│ │ │ │ └─┬ json-stable-stringify@1.0.1
│ │ │ │ └── jsonify@0.0.0
│ │ │ └── har-schema@1.0.5
│ │ ├─┬ http-signature@1.1.1
│ │ │ ├── assert-plus@0.2.0
│ │ │ ├─┬ jsprim@1.4.1
│ │ │ │ ├── assert-plus@1.0.0
│ │ │ │ ├── extsprintf@1.3.0
│ │ │ │ ├── json-schema@0.2.3
│ │ │ │ └─┬ verror@1.10.0
│ │ │ │ └── assert-plus@1.0.0
│ │ │ └─┬ sshpk@1.13.1
│ │ │ ├── asn1@0.2.3
│ │ │ ├── assert-plus@1.0.0
│ │ │ ├── bcrypt-pbkdf@1.0.1
│ │ │ ├─┬ dashdash@1.14.1
│ │ │ │ └── assert-plus@1.0.0
│ │ │ ├── ecc-jsbn@0.1.1
│ │ │ ├─┬ getpass@0.1.7
│ │ │ │ └── assert-plus@1.0.0
│ │ │ ├── jsbn@0.1.1
│ │ │ └── tweetnacl@0.14.5
│ │ ├── is-typedarray@1.0.0
│ │ ├── isstream@0.1.2
│ │ ├── json-stringify-safe@5.0.1
│ │ ├─┬ mime-types@2.1.17
│ │ │ └── mime-db@1.30.0
│ │ ├── oauth-sign@0.8.2
│ │ ├── performance-now@0.2.0
│ │ ├── qs@6.4.0
│ │ ├── safe-buffer@5.1.1
│ │ ├── stringstream@0.0.5
│ │ ├─┬ tough-cookie@2.3.3
│ │ │ └── punycode@1.4.1
│ │ ├── tunnel-agent@0.6.0
│ │ └── uuid@3.1.0
│ ├── semver@5.4.1
│ ├─┬ tar@2.2.1
│ │ ├── block-stream@0.0.9
│ │ ├─┬ fstream@1.0.11
│ │ │ └── graceful-fs@4.1.11
│ │ └── inherits@2.0.3
│ └─┬ tar-pack@3.4.0
│ ├─┬ debug@2.6.9
│ │ └── ms@2.0.0
│ ├── fstream-ignore@1.0.5
│ ├─┬ once@1.4.0
│ │ └── wrappy@1.0.2
│ ├─┬ readable-stream@2.3.3
│ │ ├── core-util-is@1.0.2
│ │ ├── isarray@1.0.0
│ │ ├── process-nextick-args@1.0.7
│ │ ├── string_decoder@1.0.3
│ │ └── util-deprecate@1.0.2
│ └── uid-number@0.0.6
├── reflect-metadata@0.1.10
├─┬ rimraf@2.6.2
│ └─┬ glob@7.1.2
│ ├── fs.realpath@1.0.0
│ ├── inflight@1.0.6
│ ├─┬ minimatch@3.0.4
│ │ └─┬ brace-expansion@1.1.8
│ │ ├── balanced-match@1.0.0
│ │ └── concat-map@0.0.1
│ └── path-is-absolute@1.0.1
├─┬ sqlite3@3.1.13
│ ├── nan@2.7.0
│ └─┬ node-pre-gyp@0.6.38
│ ├─┬ hawk@3.1.3
│ │ ├── boom@2.10.1
│ │ ├── cryptiles@2.0.5
│ │ ├── hoek@2.16.3
│ │ └── sntp@1.0.9
│ ├─┬ mkdirp@0.5.1
│ │ └── minimist@0.0.8
│ ├─┬ nopt@4.0.1
│ │ ├── abbrev@1.1.1
│ │ └─┬ osenv@0.1.4
│ │ ├── os-homedir@1.0.2
│ │ └── os-tmpdir@1.0.2
│ ├─┬ npmlog@4.1.2
│ │ ├─┬ are-we-there-yet@1.1.4
│ │ │ └── delegates@1.0.0
│ │ ├── console-control-strings@1.1.0
│ │ ├─┬ gauge@2.7.4
│ │ │ ├── aproba@1.2.0
│ │ │ ├── has-unicode@2.0.1
│ │ │ ├── object-assign@4.1.1
│ │ │ ├── signal-exit@3.0.2
│ │ │ ├─┬ string-width@1.0.2
│ │ │ │ ├── code-point-at@1.1.0
│ │ │ │ └─┬ is-fullwidth-code-point@1.0.0
│ │ │ │ └── number-is-nan@1.0.1
│ │ │ ├─┬ strip-ansi@3.0.1
│ │ │ │ └── ansi-regex@2.1.1
│ │ │ └── wide-align@1.1.2
│ │ └── set-blocking@2.0.0
│ ├─┬ rc@1.2.1
│ │ ├── deep-extend@0.4.2
│ │ ├── ini@1.3.4
│ │ ├── minimist@1.2.0
│ │ └── strip-json-comments@2.0.1
│ ├─┬ request@2.81.0
│ │ ├── aws-sign2@0.6.0
│ │ ├── aws4@1.6.0
│ │ ├── caseless@0.12.0
│ │ ├─┬ combined-stream@1.0.5
│ │ │ └── delayed-stream@1.0.0
│ │ ├── extend@3.0.1
│ │ ├── forever-agent@0.6.1
│ │ ├─┬ form-data@2.1.4
│ │ │ └── asynckit@0.4.0
│ │ ├─┬ har-validator@4.2.1
│ │ │ ├─┬ ajv@4.11.8
│ │ │ │ ├── co@4.6.0
│ │ │ │ └─┬ json-stable-stringify@1.0.1
│ │ │ │ └── jsonify@0.0.0
│ │ │ └── har-schema@1.0.5
│ │ ├─┬ http-signature@1.1.1
│ │ │ ├── assert-plus@0.2.0
│ │ │ ├─┬ jsprim@1.4.1
│ │ │ │ ├── assert-plus@1.0.0
│ │ │ │ ├── extsprintf@1.3.0
│ │ │ │ ├── json-schema@0.2.3
│ │ │ │ └─┬ verror@1.10.0
│ │ │ │ └── assert-plus@1.0.0
│ │ │ └─┬ sshpk@1.13.1
│ │ │ ├── asn1@0.2.3
│ │ │ ├── assert-plus@1.0.0
│ │ │ ├── bcrypt-pbkdf@1.0.1
│ │ │ ├─┬ dashdash@1.14.1
│ │ │ │ └── assert-plus@1.0.0
│ │ │ ├── ecc-jsbn@0.1.1
│ │ │ ├─┬ getpass@0.1.7
│ │ │ │ └── assert-plus@1.0.0
│ │ │ ├── jsbn@0.1.1
│ │ │ └── tweetnacl@0.14.5
│ │ ├── is-typedarray@1.0.0
│ │ ├── isstream@0.1.2
│ │ ├── json-stringify-safe@5.0.1
│ │ ├─┬ mime-types@2.1.17
│ │ │ └── mime-db@1.30.0
│ │ ├── oauth-sign@0.8.2
│ │ ├── performance-now@0.2.0
│ │ ├── qs@6.4.0
│ │ ├── safe-buffer@5.1.1
│ │ ├── stringstream@0.0.5
│ │ ├─┬ tough-cookie@2.3.3
│ │ │ └── punycode@1.4.1
│ │ ├── tunnel-agent@0.6.0
│ │ └── uuid@3.1.0
│ ├─┬ rimraf@2.6.2
│ │ └─┬ glob@7.1.2
│ │ ├── fs.realpath@1.0.0
│ │ ├── inflight@1.0.6
│ │ ├─┬ minimatch@3.0.4
│ │ │ └─┬ brace-expansion@1.1.8
│ │ │ ├── balanced-match@1.0.0
│ │ │ └── concat-map@0.0.1
│ │ └── path-is-absolute@1.0.1
│ ├── semver@5.4.1
│ ├─┬ tar@2.2.1
│ │ ├── block-stream@0.0.9
│ │ ├─┬ fstream@1.0.11
│ │ │ └── graceful-fs@4.1.11
│ │ └── inherits@2.0.3
│ └─┬ tar-pack@3.4.0
│ ├─┬ debug@2.6.9
│ │ └── ms@2.0.0
│ ├── fstream-ignore@1.0.5
│ ├─┬ once@1.4.0
│ │ └── wrappy@1.0.2
│ ├─┬ readable-stream@2.3.3
│ │ ├── core-util-is@1.0.2
│ │ ├── isarray@1.0.0
│ │ ├── process-nextick-args@1.0.7
│ │ ├── string_decoder@1.0.3
│ │ └── util-deprecate@1.0.2
│ └── uid-number@0.0.6
├─┬ typeorm@0.1.0-alpha.49
│ ├── app-root-path@2.0.1
│ ├─┬ chalk@2.1.0
│ │ ├─┬ ansi-styles@3.2.0
│ │ │ └─┬ color-convert@1.9.0
│ │ │ └── color-name@1.1.3
│ │ ├── escape-string-regexp@1.0.5
│ │ └─┬ supports-color@4.4.0
│ │ └── has-flag@2.0.0
│ ├─┬ cli-highlight@1.1.4
│ │ ├─┬ chalk@1.1.3
│ │ │ ├── ansi-styles@2.2.1
│ │ │ ├── has-ansi@2.0.0
│ │ │ └── supports-color@2.0.0
│ │ ├── he@1.1.1
│ │ ├── highlight.js@9.12.0
│ │ ├─┬ mz@2.7.0
│ │ │ ├── any-promise@1.3.0
│ │ │ └─┬ thenify-all@1.6.0
│ │ │ └── thenify@3.3.0
│ │ └─┬ yargs@4.8.1
│ │ ├── lodash.assign@4.2.0
│ │ ├── os-locale@1.4.0
│ │ ├─┬ read-pkg-up@1.0.1
│ │ │ ├─┬ find-up@1.1.2
│ │ │ │ ├── path-exists@2.1.0
│ │ │ │ └─┬ pinkie-promise@2.0.1
│ │ │ │ └── pinkie@2.0.4
│ │ │ └─┬ read-pkg@1.1.0
│ │ │ ├─┬ load-json-file@1.1.0
│ │ │ │ └─┬ strip-bom@2.0.0
│ │ │ │ └── is-utf8@0.2.1
│ │ │ └── path-type@1.1.0
│ │ ├── which-module@1.0.0
│ │ ├── window-size@0.2.0
│ │ └─┬ yargs-parser@2.4.1
│ │ └── camelcase@3.0.0
│ ├── dotenv@4.0.0
│ ├─┬ js-yaml@3.10.0
│ │ ├─┬ argparse@1.0.9
│ │ │ └── sprintf-js@1.0.3
│ │ └── esprima@4.0.0
│ ├─┬ xml2js@0.4.19
│ │ ├── sax@1.2.4
│ │ └── xmlbuilder@9.0.4
│ ├─┬ yargonaut@1.1.2
│ │ ├─┬ chalk@1.1.3
│ │ │ ├── ansi-styles@2.2.1
│ │ │ └── supports-color@2.0.0
│ │ ├── figlet@1.2.0
│ │ └── parent-require@1.0.0
│ └─┬ yargs@9.0.1
│ ├── camelcase@4.1.0
│ ├─┬ cliui@3.2.0
│ │ └── wrap-ansi@2.1.0
│ ├── decamelize@1.2.0
│ ├── get-caller-file@1.0.2
│ ├─┬ os-locale@2.1.0
│ │ ├─┬ execa@0.7.0
│ │ │ ├─┬ cross-spawn@5.1.0
│ │ │ │ ├─┬ lru-cache@4.1.1
│ │ │ │ │ ├── pseudomap@1.0.2
│ │ │ │ │ └── yallist@2.1.2
│ │ │ │ ├─┬ shebang-command@1.2.0
│ │ │ │ │ └── shebang-regex@1.0.0
│ │ │ │ └─┬ which@1.3.0
│ │ │ │ └── isexe@2.0.0
│ │ │ ├── get-stream@3.0.0
│ │ │ ├── is-stream@1.1.0
│ │ │ ├─┬ npm-run-path@2.0.2
│ │ │ │ └── path-key@2.0.1
│ │ │ ├── p-finally@1.0.0
│ │ │ └── strip-eof@1.0.0
│ │ ├─┬ lcid@1.0.0
│ │ │ └── invert-kv@1.0.0
│ │ └─┬ mem@1.1.0
│ │ └── mimic-fn@1.1.0
│ ├─┬ read-pkg-up@2.0.0
│ │ ├─┬ find-up@2.1.0
│ │ │ └─┬ locate-path@2.0.0
│ │ │ ├─┬ p-locate@2.0.0
│ │ │ │ └── p-limit@1.1.0
│ │ │ └── path-exists@3.0.0
│ │ └─┬ read-pkg@2.0.0
│ │ ├─┬ load-json-file@2.0.0
│ │ │ ├─┬ parse-json@2.2.0
│ │ │ │ └─┬ error-ex@1.3.1
│ │ │ │ └── is-arrayish@0.2.1
│ │ │ ├── pify@2.3.0
│ │ │ └── strip-bom@3.0.0
│ │ ├─┬ normalize-package-data@2.4.0
│ │ │ ├── hosted-git-info@2.5.0
│ │ │ ├─┬ is-builtin-module@1.0.0
│ │ │ │ └── builtin-modules@1.1.1
│ │ │ └─┬ validate-npm-package-license@3.0.1
│ │ │ ├─┬ spdx-correct@1.0.2
│ │ │ │ └── spdx-license-ids@1.2.2
│ │ │ └── spdx-expression-parse@1.0.4
│ │ └── path-type@2.0.0
│ ├── require-directory@2.1.1
│ ├── require-main-filename@1.0.1
│ ├─┬ string-width@2.1.1
│ │ ├── is-fullwidth-code-point@2.0.0
│ │ └─┬ strip-ansi@4.0.0
│ │ └── ansi-regex@3.0.0
│ ├── which-module@2.0.0
│ ├── y18n@3.2.1
│ └── yargs-parser@7.0.0
└── uikit@3.0.0-beta.30
electron-builder 19.36.0
Rebuilding native production dependencies for win32:x64
Packaging app for platform win32 ia32 using electron v1.7.5
Packaging app for platform win32 x64 using electron v1.7.5
Application packaged successfully! [ 'app-builds/angular-electron-starter-win32-ia32',
'app-builds/angular-electron-starter-win32-x64' ]
Process finished with exit code 0
But when I run the .exe on Windows I get the error mentioned above: the SQLite package has not been found.
sqlite3 is a native Node.js module, so it can't be used with Electron without rebuilding it to target Electron. electron-builder will rebuild the native module for our platform, and we can then require it in code as normal.
These are the steps you need to follow.
First, add a postinstall step in your package.json:
"scripts": {
"postinstall": "install-app-deps"
...
}
and then install the necessary dependencies and build:
npm install --save-dev electron-builder
npm install --save sqlite3
npm run postinstall
I have used this same procedure on Windows 7 (32-bit) and Windows 10 (64-bit) and did not face any problem with it.
