How to get docker-compose up working when given these issues - docker

Running docker-compose up returns: no configuration file provided: not found
This is given that I am in the directory where the docker-compose-LocalExecutor.yml file is.
What should my next steps be to open Airflow at localhost:8080?
Note: I am using Windows.
Looking at previous Stack Overflow posts, I have also tried docker-compose -f .\docker-compose-LocalExecutor.yml up, but it returns:
Error response from daemon: driver failed programming external connectivity on endpoint docker-airflow-master-webserver-1 (f5311e617e9076840eeda2343df931c4716b9ae33817ba1ed91c10c272df9766): Bind for 0.0.0.0:8080 failed: port is already allocated
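The "port is already allocated" error means something on the host is already bound to 8080. One workaround, if you cannot free the port, is to remap the host-side port in the compose file. This is a sketch; the service name and port mapping are assumptions about what a typical docker-compose-LocalExecutor.yml contains and may differ in yours:

```yaml
# Hypothetical excerpt of docker-compose-LocalExecutor.yml - change only the
# host-side (left) number if 8080 is already taken on your machine.
services:
  webserver:
    ports:
      - "8081:8080"   # Airflow would then be reachable at http://localhost:8081
```

Only the left-hand (host) port needs to change; the container still listens on 8080 internally.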

What is the correct approach to create & start an application channel in Hyperledger Fabric? (Edited)

TL;DR
What is the correct method for creating a channel in HF? I've tried 4 different approaches, each ending in failure.
Troubleshooting of errors is welcome in the comments, but in this post, what I'd like to ask is what is the correct approach/method/setup to create a channel in Hyperledger Fabric?
Edit Note
My original post was asking multiple questions. I've updated this post to focus on the first question and put my second question focusing on troubleshooting a specific approach here.
Approach => Result
Run channel create on host (no local core.yaml) => core config not found
Run channel create in Docker container => no fabric binaries (could not execute)
Run channel create on host (with local core.yaml) => failed to connect to orderer
Run channel create on host (with linked core.yaml) => failed to start container
Background:
I'm building a customized HF network (using the Test Network as a reference). I've set up the CAs, generated the MSPs, started the nodes (1 Peer, 1 Orderer), and generated the Orderer genesis block (and then restarted the Orderer node).
I'm using Docker swarm, but so far, I'm only focusing on the first host (which is running the above 4 containers: 2 nodes and respective CAs).
Here's more detail on each approach I've tried:
1. Run channel create from host with no local core.yaml file.
After packaging a channel Tx, I try to create a channel using the following command:
peer channel create -o $host:1050 -c $CHANNEL_NAME --ordererTLSHostnameOverride $ORG -f ./channel-artifacts/${CHANNEL_NAME}.tx --outputBlock ./channel-artifacts/${CHANNEL_NAME}.block --tls --cafile $ORDERER_CA
I get the following error:
2020-09-15 16:37:06.186 MST [main] InitCmd -> ERRO 001 Fatal error when initializing core config : Could not find config file. Please make sure that FABRIC_CFG_PATH is set to a path which contains core.yaml
The FABRIC_CFG_PATH is set (by default) to /etc/hyperledger/fabric inside the Peer Docker container (and that path contains an auto-generated core.yaml file). However, since the channel create command is run on the host, not in the Docker container, I'm guessing that is why the core config is not found?
(However, according to this post, I shouldn't need a local core.yaml file?)
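For what it's worth, the "core config not found" part of approach #1 can be worked around by pointing FABRIC_CFG_PATH at a host directory that actually contains a core.yaml (for example the one shipped in fabric-samples/config). A minimal sketch; the directory layout here is illustrative, not taken from the question:

```shell
# Sketch: give the host-side peer CLI a core.yaml to read (paths illustrative)
mkdir -p ./config
touch ./config/core.yaml             # in practice, copy the real core.yaml from fabric-samples/config
export FABRIC_CFG_PATH="$PWD/config"

# The peer CLI looks for core.yaml under FABRIC_CFG_PATH, so it can now find it:
test -f "$FABRIC_CFG_PATH/core.yaml" && echo "core.yaml found"
```

Note this only fixes the config lookup; as approach #3 shows, connectivity to the orderer is a separate problem.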
2. Try running channel create commands from inside Docker container (with existing, auto-generated core.yaml file).
I exec'd into the Peer Docker container and ran my create Channel Tx and Create Channel commands directly from the docker container. However, this fails because the docker container does not have the fabric binaries loaded. (I tried doing a read-only bind mount of my fabric-samples/bin to a directory in the docker container's /bin path, but ran into problems and didn't explore it further.)
3. Create local core.yaml file, set FABRIC_CFG_PATH=$PWD and run channel create commands locally.
Resulting error:
Error: failed to create deliver client for orderer: orderer client failed to connect to oem.scm.cloudns.asia:1050: failed to create new connection: context deadline exceeded
This seems to be because the host:port mapping only has significance inside the docker network, NOT on the host machine. (I can successfully ping the host:port and :port from inside my docker Peer container.)
Update: container log for the orderer shows this error:
2020-09-16 19:14:52.165 UTC [core.comm] ServerHandshake -> ERRO 53e TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=10.0.0.3:54110
For a similar problem, Gari Singh said that the TLS server certificate SANS was missing the correct address. I tried adding 10.0.0.3 to my docker-compose file for the CA and restarting, but still got the same error.
4. Same as #3, except this time bind mount the local directory containing core.yaml to the Peer Docker container, and try to create the channel from the host.
When I tried this approach, I ran into problems because the msp path in the core.yaml file either points to the path on my host, or the corresponding docker container internal msp path.
If I point to the Docker internal msp path, then the Channel start command fails (since the command is run from the host). If I point to the Host msp path, the Peer Docker container fails to start because it can't find the path from inside the container.
Based on everything I've tried, I want to know what is the "correct" approach/method/setup for creating a channel in Hyperledger Fabric?
First of all, did you try starting the network by following the guide you mentioned above?
I think the error is related to a change in the configuration and in the execution order of the steps for starting up a network. Did you create the genesis block correctly (line 85 of your docker-compose file)? Did you check the logs? The error seems to be related to a misconfiguration of the FABRIC_CFG_PATH when you create the material for your network.
I think the answer I may have been looking for is to use the Fabric CLI (the hyperledger/fabric-tools Docker image).
This approach seems to solve all of the issues I was having with approaches #1-4:
The fabric-tools image gives access to fabric binaries
The internal paths referenced in the configtx.yaml and core.yaml files are accessible (since the CLI is mounted as a Docker container), and
the CLI can natively access the host:port of other nodes (since it is part of the same Docker network)
(Note: Now that I am using the CLI, I am getting a different error, although I believe it is related to my configtx Policy, not to the method of creating the channel.)
Since I've seen some setups that do not use the CLI, I assume that this is not the only correct answer. If anyone else has another solution, I am eager to learn more.
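For anyone following along, the CLI can be added as one more service in the compose file. This is a sketch modeled on the test network; the image tag, mount paths, and network name are my assumptions, not taken from the question:

```yaml
# Hypothetical CLI service based on hyperledger/fabric-tools
services:
  cli:
    image: hyperledger/fabric-tools:latest
    tty: true
    environment:
      - FABRIC_CFG_PATH=/etc/hyperledger/fabric
    volumes:
      - ./channel-artifacts:/opt/channel-artifacts   # channel Tx and blocks
      - ./organizations:/opt/organizations           # MSP/TLS material
    networks:
      - fabric_net   # must join the same network as the peer and orderer
```

Because the CLI container joins the Fabric Docker network, the peer binaries, the mounted config, and the orderer's host:port are all reachable from inside it, which is exactly what approaches #1-4 were missing.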

ERROR: for azerothcore-wotlk_ac-database_1 Cannot start service ac-database: driver failed programming external connectivity on endpoint

With Docker on Windows, when using docker-compose up, I get an error message in the console (Git Bash).
Command used: docker-compose up
Docker compose file: https://github.com/azerothcore/azerothcore-wotlk/blob/master/docker-compose.yml
Results:
ERROR: for azerothcore-wotlk_ac-database_1 Cannot start service ac-database: driver failed programming external connectivity on endpoint azerothcore-wotlk_ac-database_1 (a999876eaab9126abc6635a5d62ab31c5e14fd12439ec2747a42e72fb923a4af): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:3306:tcp:172.19.0.2:3306: input/output error
ERROR: for ac-database Cannot start service ac-database: driver failed programming external connectivity on endpoint azerothcore-wotlk_ac-database_1 (a999876eaab9126abc6635a5d62ab31c5e14fd12439ec2747a42e72fb923a4af): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:3306:tcp:172.19.0.2:3306: input/output error ERROR: Encountered errors while bringing up the project.
I followed this link,
http://www.azerothcore.org/wiki/Install-with-Docker
and at first got the same message as yours.
The problem is that the Docker setup includes MySQL,
so if you have MySQL installed on the machine, you must uninstall it first
(or at least stop it, maybe with net stop mysql).
After that, docker-compose up runs well.
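Before retrying docker-compose up, you can check whether anything is still listening on MySQL's port. A small sketch using bash's built-in /dev/tcp (run it under bash, not plain sh):

```shell
# Check whether port 3306 is already in use before starting the compose stack.
port=3306
if bash -c "exec 3<>/dev/tcp/127.0.0.1/$port" 2>/dev/null; then
  echo "port $port is in use - stop the local MySQL service first (e.g. net stop mysql on Windows)"
else
  echo "port $port looks free"
fi
```

If the port is reported as free, the "driver failed programming external connectivity" error should be gone on the next docker-compose up.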

Permission Denied on port when starting HDP Sandbox Proxy on Docker (Windows 10)

I am getting the following error when trying to start sandbox-proxy (proxy-deploy.sh) on docker.
I have tried reinstalling, rebooting, and checking ports already in use with netstat -a -n. Nothing helped.
Error response from daemon: driver failed programming external connectivity on endpoint sandbox-proxy (b710798aa75668908d359602541ed4d8a3da4e4b8b2856f5e779453ea296aeef): Error starting userland proxy: Bind for 0.0.0.0:50111: unexpected error Permission denied
Error: failed to start containers: sandbox-proxy
Go to the location where you saved the Docker deployment scripts – refer to Deploy HDP Sandbox as an example. You will notice a new directory sandbox was created.
Edit file sandbox/proxy/proxy-deploy.sh
Modify the conflicting port (the first in the pair). For example, 6001:6001 to 16001:6001
Save/Exit the File
Run bash script: bash sandbox/proxy/proxy-deploy.sh
Repeat steps for continued port conflicts
More info : https://hortonworks.com/tutorial/sandbox-deployment-and-install-guide/section/3/#port-conflict
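The port edit in the steps above can also be scripted. This sketch uses a made-up one-line stand-in for proxy-deploy.sh (the real file's contents differ) just to show the remap, which changes only the host side of the pair:

```shell
# Create a stand-in for sandbox/proxy/proxy-deploy.sh (contents are illustrative)
cat > proxy-deploy.sh <<'EOF'
docker run --name sandbox-proxy -p 6001:6001 -p 50111:50111 sandbox-proxy-image
EOF

# Remap only the host side (the first number) of the conflicting pair
sed -i 's/-p 6001:6001/-p 16001:6001/' proxy-deploy.sh

grep -- '-p 16001:6001' proxy-deploy.sh   # confirm the remap took effect
```

The container-side port (after the colon) must stay the same, since the services inside the sandbox still listen on it.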

Docker is giving error in running RabbitMQ

I have an issue where Docker is throwing an exception for RabbitMQ and not running the project. It was working fine two days ago.
Error Code:
Severity Code Description Project File Line Source Suppression State
Error The DOCKER_REGISTRY variable is not set. Defaulting to a blank string.
Creating network "dockercompose17804906324906542053_default" with the default driver
Building syncserviceexchange
Building webapisyncserviceexchange
Creating dockercompose17804906324906542053_rabbit2_1 ...
Creating elasticsearch ...
Creating mysql1 ...
Creating myadmin ...
Creating dockercompose17804906324906542053_rabbit2_1 ... error
ERROR: for dockercompose17804906324906542053_rabbit2_1 Cannot start service rabbit2: driver failed programming external connectivity on endpoint dockercompose17804906324906542053_rabbit2_1 (5ff7c5b4d0fa9db5bc8b35dc4010c306c0e357a97d1ea912bd9b290fdfa6f8fd): Error starting userland proxy: Bind for 0.0.0.0:5672 failed: port is already allocated
Creating mysql1 ... error
ERROR: for mysql1 Cannot start service db: error while creating mount source path '/host_mnt/g/Flexfone/Imp&Rec/Flexfone/SyncServiceExchange/datadir': mkdir /host_mnt/g: file exists
Creating elasticsearch ... done
Creating myadmin ... done
ERROR: for rabbit2 Cannot start service rabbit2: driver failed programming external connectivity on endpoint dockercompose17804906324906542053_rabbit2_1 (5ff7c5b4d0fa9db5bc8b35dc4010c306c0e357a97d1ea912bd9b290fdfa6f8fd): Error starting userland proxy: Bind for 0.0.0.0:5672 failed: port is already allocated
ERROR: for db Cannot start service db: error while creating mount source path '/host_mnt/g/Flexfone/Imp&Rec/Flexfone/SyncServiceExchange/datadir': mkdir /host_mnt/g: file exists
Encountered errors while bringing up the project..
It says the port is already used; you have to stop the previous container first:
Bind for 0.0.0.0:5672 failed: port is already allocated
You can use docker-compose down - if you are using docker-compose, this will also stop all the services in that compose file.
Or use docker stop <container_name> to stop a specific container.
If you updated the image, remove it first, rebuild it, and rerun the containers; do any cleanup necessary, but in that case you have to run and configure things manually on the command line.
To see which containers are running, check docker ps. If the port is not in use there, then another process on the machine took it; check what could have done that.

saving docker log files with volumes produces permission denied

I am trying to test saving the log files of Docker containers by playing on this site, which gives you a Linux root shell with Docker installed. I've used the solution provided here:
docker run -ti -v /dev/log:/root/data --name zizimongodb mongo
This is what I got in the console:
docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "process_linux.go:339: container init caused \"rootfs_linux.go:57: mounting \\\"/dev/log\\\" to rootfs \\\"/graph/overlay2/7f1eb83902e3688c0a1204c2fe8dfd8fbf43e1093bc578e4c41028e8b03e4b38/merged\\\" at \\\"/graph/overlay2/7f1eb83902e3688c0a1204c2fe8dfd8fbf43e1093bc578e4c41028e8b03e4b38/merged/root/data\\\" caused \\\"permission denied\\\"\"".
But the container has started:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8adaa75ba6f7 mongo "docker-entrypoint..." 2 minutes ago Created zizimongodb
docker logs -f zizimongodb returns nothing. When I stop the container, nothing is saved in /root/data. Any idea how I can correctly save all the logs?
Since you are using the official mongo image from Docker Hub, it is worth pointing out that this official image (like many, if not all, of the official images) does not send log output to the default locations you might expect from a Linux distro package of the same software.
Instead, most software that can be told where to log is configured to log to stdout/stderr, so that Docker log plugins and the docker logs command itself work properly.
For the mongodb case you can see the somewhat complicated code here that tells the mongodb process to use the /proc filesystem file descriptor that maps to "stdout", as long as it is writable when the container is started. Because of some bugs, this is more complicated than other Dockerfile customizations of log output (you can read more at the links in the comments).
I think a more reasonable way to try and do some form of log consolidation or collection is to read about docker log drivers and see if any of those options works for you. For example, if you like journald there is a driver which will take all container logs and pass them to journald on the host.
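As a concrete example of the journald option: switching the Docker daemon's default log driver is a one-line change in daemon.json (a sketch; restart the Docker daemon after editing):

```json
{
  "log-driver": "journald"
}
```

A single container can also be switched without touching the daemon config, e.g. docker run --log-driver=journald mongo; its stdout/stderr then land in the host journal and can be read with journalctl CONTAINER_NAME=<name>.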
