The following error occurs when I start dotdocker with `sudo dotdocker start` for my project. It was working successfully before, and this error appeared without any changes to the project code. The versions of docker-compose and dotdocker are 1.21.2 and 1.4.2 respectively. I tried several things to solve the problem, but nothing worked, such as:
1 - sudo dotdocker stop
2 - sudo systemctl restart NetworkManager
3 - sudo service docker restart
❯ Start dotdocker containers
❯ Start proxy
↓ Pulling codekitchen/dinghy-http-proxy:latest [skipped]
→ Image already exists
↓ Creating dotdocker-proxy [skipped]
→ Container already created
✖ Starting dotdocker-proxy
→ (HTTP code 500) server error - driver failed programming external conn
…
✔ Start dnsmasq
Setting up DNS
(node:4314) UnhandledPromiseRejectionWarning: Error: (HTTP code 500) server error - driver failed programming external connectivity on endpoint dotdocker-proxy (4e8078f31f05224f8041e65026e179114701636bbd13eb71a67382ccae860db9): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
at /usr/local/lib/node_modules/dotdocker/node_modules/docker-modem/lib/modem.js:257:17
at getCause (/usr/local/lib/node_modules/dotdocker/node_modules/docker-modem/lib/modem.js:287:7)
at Modem.buildPayload (/usr/local/lib/node_modules/dotdocker/node_modules/docker-modem/lib/modem.js:256:5)
at IncomingMessage.<anonymous> (/usr/local/lib/node_modules/dotdocker/node_modules/docker-modem/lib/modem.js:232:14)
at IncomingMessage.emit (events.js:326:22)
at endReadableNT (_stream_readable.js:1244:12)
at processTicksAndRejections (internal/process/task_queues.js:80:21)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:4314) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:4314) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
I haven't used dotdocker, but from what I see in the error, and specifically this part:
tcp 0.0.0.0:80: bind: address already in use
Something on your system is already using port 80, which your program wants to bind.
To see what is using port 80, you can try:
lsof -P -S 2 -i "tcp:80" | grep "\(:80->.*:\|:80 (LISTEN)$\)"
This will show the matching processes, with the process ID in the second column.
For example:
apache2 1914 root 4u IPv6 11920 0t0 TCP *:80 (LISTEN)
You can then kill the process that is occupying the port you want to use with:
kill -KILL 1914
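If you prefer a single step, here is a minimal sketch that finds whatever is listening on port 80 and stops it; it assumes lsof is installed and that the process is safe to terminate:

```
# Grab the first PID listening on TCP port 80.
PID=$(sudo lsof -t -iTCP:80 -sTCP:LISTEN | head -n 1)
if [ -n "$PID" ]; then
  ps -p "$PID" -o comm=   # show the process name before killing it
  sudo kill "$PID"        # try SIGTERM first; escalate to -KILL only if needed
else
  echo "Nothing is listening on port 80."
fi
```

On many systems the culprit is a pre-installed web server such as apache2 or nginx; in that case stopping its service (e.g. `sudo systemctl stop apache2`) is cleaner than killing the process.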
Related
First-time RabbitMQ user here. I am using the following command to start RabbitMQ with Docker.
docker run --rm -it -p 15672:15672 -p 5672:5672 rabbitmq:3-management
I can see that RabbitMQ has started, and I can open the management console as well. When the individual services in the NestJS application start, I see the queues being created on RabbitMQ. But whenever the services try to communicate, the following error appears in the RabbitMQ logs:
2022-10-13 08:50:46.763870+00:00 [info] <0.1222.0> connection <0.1222.0> (172.17.0.1:62850 -> 172.17.0.2:5672): user 'guest' authenticated and granted access to vhost '/'
2022-10-13 08:50:46.773788+00:00 [error] <0.1231.0> Channel error on connection <0.1222.0> (172.17.0.1:62850 -> 172.17.0.2:5672, vhost: '/', user: 'guest'), channel 1:
2022-10-13 08:50:46.773788+00:00 [error] <0.1231.0> operation basic.publish caused a channel exception precondition_failed: fast reply consumer does not exist
2022-10-13 08:50:46.785108+00:00 [warning] <0.1222.0> closing AMQP connection <0.1222.0> (172.17.0.1:62850 -> 172.17.0.2:5672, vhost: '/', user: 'guest'):
2022-10-13 08:50:46.785108+00:00 [warning] <0.1222.0> client unexpectedly closed TCP connection
I am running this on an Apple Silicon-based MacBook.
Does anyone have an idea why this error is appearing?
It looks like this is the issue:
https://github.com/nestjs/nest/issues/7972
I had to adjust the package versions accordingly to solve this.
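In practice that meant pinning the AMQP client packages used by the NestJS RMQ transport (amqplib and amqp-connection-manager) to versions the issue thread reports as compatible. A hypothetical sketch; `<known-good-version>` is a placeholder, so check the linked issue for the exact pins:

```
# Reinstall the AMQP packages at versions known to work together.
npm install amqplib@<known-good-version> amqp-connection-manager@<known-good-version>
```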
When I run `composer-rest-server -c acme-admin@test-bna`, I get this output:
Discovering the Returning Transactions..
Discovered types from business network definition
Generating schemas for all types in business network definition ...
Generated schemas for all types in business network definition
Adding schemas for all types to Loopback ...
Added schemas for all types to Loopback
events.js:183
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE :::3000
at Server.setupListenHandle [as _listen2] (net.js:1360:14)
at listenInCluster (net.js:1401:12)
at Server.listen (net.js:1485:7)
at module.exports.promise.then.then (/usr/local/lib/node_modules/composer-rest-server/cli.js:143:19)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:189:7)
I am actually doing a Udemy course about how to set up a Hyperledger multi-org network.
The error shows that port 3000 is busy.
EADDRINUSE means some process is already listening on that port.
You can find the process by using the following command:
sudo ss -lptn 'sport = :3000'
It will return the process ID; you can then kill it with:
sudo kill -9 process_id
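As a one-liner, a minimal sketch assuming GNU grep (for -P) and that the process is safe to kill:

```
# Extract the PID from the ss output and kill it.
sudo kill -9 "$(sudo ss -lptn 'sport = :3000' | grep -oP 'pid=\K[0-9]+' | head -n 1)"
```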
The problem was that some process was running on port 3000. This command fixed it:
fuser -n tcp -k 3000
I am trying to deploy a business network using this command:
composer network start -c PeerAdmin@hlfv1 -n test-bna -V 0.0.1 -A admin -S adminpw
And it is failing with error:
Error trying to start business network. Error: Failed to connect to
any peer event hubs. It is required that at least 1 event hub has been
connected to receive the commit event.
On checking the composer logs, it says:
2018-11-08T13:30:59.190Z WARN :HLFConnection
:_connectToEventHubs() event hub localhost:7051 failed to connect:
12 UNIMPLEMENTED: unknown service protos.Deliver {}$
2018-11-08T13:31:46.763Z WARN :HLFConnection
:_connectToEventHubs() event hub localhost:7051 failed to connect:
12 UNIMPLEMENTED: unknown service protos.Deliver {}$
Could someone please help with resolving this?
I want to use composer-rest-server to generate a REST API.

1. I run `composer network ping -c admin@trade-network` and the result is:

```
The connection to the network was successfully tested: trade-network
Business network version: 0.2.6-20180530153450
Composer runtime version: 0.19.8
participant: org.hyperledger.composer.system.NetworkAdmin#admin
identity: org.hyperledger.composer.system.Identity#8633aef10e9d998be8bec4bb4ab535eb74e3d6832cb21286b89cadf0e95863c5
Command succeeded
```

2. I run `docker run -e COMPOSER_CARD=admin@trade-network -e COMPOSER_NAMESPACES=never --name rest -p 3000:3000 hyperledger/composer-rest-server`, but this error appears:

```
[2018-06-15 08:18:17] PM2 log: Launching in no daemon mode
[2018-06-15 08:18:17] PM2 log: Starting execution sequence in -fork mode- for app name:composer-rest-server id:0
[2018-06-15 08:18:17] PM2 log: App name:composer-rest-server id:0 online
WARNING: NODE_APP_INSTANCE value of '0' did not match any instance config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
Discovering types from business network definition ...
Exception: Error: Error trying to ping. Error: REQUEST_TIMEOUT
Connection fails: Error: Error trying to ping. Error: REQUEST_TIMEOUT
It will be retried for the next request.
Error: Error trying to ping. Error: REQUEST_TIMEOUT
at _checkRuntimeVersions.then.catch (/home/composer/.npm-global/lib/node_modules/composer-rest-server/node_modules/composer-connector-hlfv1/lib/hlfconnection.js:790:34)
at <anonymous>
at process._tickDomainCallback (internal/process/next_tick.js:228:7)
[2018-06-15 08:23:21] PM2 log: App [composer-rest-server] with id [0] and pid [15], exited with code [1] via signal [SIGINT]
[2018-06-15 08:23:21] PM2 log: Starting execution sequence in -fork mode- for app name:composer-rest-server id:0
[2018-06-15 08:23:21] PM2 log: App name:composer-rest-server id:0 online
WARNING: NODE_APP_INSTANCE value of '0' did not match any instance config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
Discovering types from business network definition ...
Connection fails: Error: Error trying to ping. Error: REQUEST_TIMEOUT
It will be retried for the next request.
Exception: Error: Error trying to ping. Error: REQUEST_TIMEOUT
Error: Error trying to ping. Error: REQUEST_TIMEOUT
```
So I want to ask: why does this happen?
This is a networking issue. The URL values in your connection.json (part of the "card") contain localhost, which works fine from the command line of your computer, where Docker sets up port forwarding into the containers of the Fabric. When you use the same card inside a REST container, localhost loops back into that container and does not "see" the Fabric.
Step 6 of the Google OAuth2 Tutorial deals with this problem by creating a special restadmin card and modifying the addresses in its connection.json so the container can 'find' the Fabric.
I assume you are already sharing a volume on the docker run command so that the REST container can find the folder with the cards, e.g. `-v ~/.composer:/home/composer/.composer`.
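Putting those together, a minimal sketch of the run command, assuming you have created a restadmin card (per step 6 of the tutorial) whose connection.json points at addresses reachable from inside the container:

```
# Mount the host card store so the container can read the card.
# The card name restadmin@trade-network is an assumption; adjust to match yours.
docker run \
  -e COMPOSER_CARD=restadmin@trade-network \
  -e COMPOSER_NAMESPACES=never \
  -v ~/.composer:/home/composer/.composer \
  --name rest -p 3000:3000 \
  hyperledger/composer-rest-server
```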
My docker service (epst) fails to start if I'm also running VSCode. The error is:
ERROR: for epst Cannot start service epst: driver failed programming external connectivity on endpoint epst (long-hash): Error starting userland proxy: Bind for 0.0.0.0:5123 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
If I shut down VSCode and re-launch docker-compose, then everything comes up fine.
So my question is how do I identify what is binding to port 5123 in VSCode?
I believe you might be looking for `lsof -i :5123`?
See the man page for lsof.
This will then return a list of processes using the port you entered (5123):
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
appName 5123 yourUser -- ---- -------------------------
You could then `kill` the PID shown in the second column (5123 in this example) to free up the desired port.
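To tell which VSCode helper actually owns the port, here is a minimal sketch that prints the full command line of the bound process (assumes lsof is available):

```
# Find the PID listening on TCP 5123 and show its full command line.
PID=$(lsof -t -iTCP:5123 -sTCP:LISTEN | head -n 1)
[ -n "$PID" ] && ps -fp "$PID"
```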