I have installed a UCP master on a node and am able to log in using admin/orca.
The UCP admin fingerprint is as below:
23:B9:9E:08:B5:67:73:0A:C7:17:07:76:56:B2:F3:D2:73:CE:B5:74
Now I am trying to configure a UCP replica on a separate node using the following command:
docker run --rm -i --env UCP_ADMIN_USER=admin --env UCP_ADMIN_PASSWORD=orca -v /var/run/docker.sock:/var/run/docker.sock --name ucp docker/ucp join --replica --url https://10.211.144.177 --san r1.ucp.abc.com --fingerprint 23:B9:9E:08:B5:67:73:0A:C7:17:07:76:56:B2:F3:D2:73:CE:B5:74
After I run the above command, I am getting the following error:
ERRO[0102] Orca didn't come up within 1m0s. Run "docker logs ucp-controller" for more details
FATA[0102] Unable to connect to system
Please help me resolve the error.
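For anyone debugging the same failure: the error message itself names the next step. On the node where the join was attempted, something along these lines (a sketch; the ucp-controller container name is taken from the error message above) should show why the controller did not come up:
docker logs ucp-controller
docker ps -a | grep ucp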
I'm new to the Elastic Stack. I've been able to install Elasticsearch and Kibana via Docker using the instructions on elastic.co. However, I'm having some difficulty installing Filebeat using the directions on elastic.co. After starting Elasticsearch and Kibana, when I run:
docker run docker.elastic.co/beats/filebeat:7.13.0 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["localhost:9200"]
I get the following output:
Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://localhost:9200: Get "http://localhost:9200": dial tcp [::1]:9200: connect: cannot assign requested address]
This is with a docker setup. Any guidance to fixing this would be great. Thanks.
If you were following the instructions from the tutorial, you can see that it should use the same network.
So instead of
docker run docker.elastic.co/beats/filebeat:7.13.0 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["localhost:9200"]
it should be
docker run --net {network_name} docker.elastic.co/beats/filebeat:7.13.0 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["localhost:9200"]
Check your Elasticsearch container's network with the following command:
docker inspect -f '{{.NetworkSettings.Networks}}' {es-container-name}
If you try to run Kibana+Elastic+Filebeat on Windows, I would suggest writing a Dockerfile (or docker-compose file) with your own filebeat.yml.
Of course, if you run your Elasticsearch outside a container, you should use the host network, but that's another story.
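For reference, a minimal sketch of the whole sequence on one user-defined network, assuming the network name elastic and the container name es01 (both names are assumptions; the elastic.co instructions use similar ones). Note that inside the shared network the Elasticsearch container name replaces localhost:
docker network create elastic
docker run -d --name es01 --net elastic -p 9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.13.0
docker run -d --name kibana --net elastic -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://es01:9200" docker.elastic.co/kibana/kibana:7.13.0
docker run --net elastic docker.elastic.co/beats/filebeat:7.13.0 setup -E setup.kibana.host=kibana:5601 -E output.elasticsearch.hosts=["es01:9200"]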
I am trying to do
docker run -d --name artifactory -v D://JFrog/artifactory/var/:/var/opt/jfrog/artifactory -p 8281:8081 docker.bintray.io/jfrog/artifactory-oss
By following this tutorial: https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory#InstallingArtifactory-DockerComposeInstallation
However, it fails even after setting permissions and adding localhost as shared.node.ip in $JFROG_HOME/artifactory/var/etc/system.yaml.
The logs from the Docker console are here: https://pastebin.com/dQAckECE
Any leads/help will be appreciated. TIA
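For reference, the JFrog installation guide has the mounted directory owned by the user the official image runs as (UID/GID 1030) before the first start; a minimal sketch of that preparation on a Linux host, with paths assumed from the command above:
mkdir -p $JFROG_HOME/artifactory/var/etc
chown -R 1030:1030 $JFROG_HOME/artifactory/var
docker run -d --name artifactory -v $JFROG_HOME/artifactory/var/:/var/opt/jfrog/artifactory -p 8281:8081 docker.bintray.io/jfrog/artifactory-oss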
I downloaded 2 versions of neo4j on Ubuntu 18.04: "neo4j-community-3.5.12" and "neo4j-community-3.5.8".
When I run 3.5.8 with the default settings, I can see it from the web at http://localhost:7474/.
For 3.5.12, I changed the conf/neo4j.conf file and set different port numbers so as not to conflict with the default ones.
3.5.8 runs fine on :7474. When I start 3.5.12, the logs say it is running, but when I check from the browser it is not. I tried 2 different port settings; neither worked. Below is the log file.
Why is it not running?
I see that many people recommended using docker. I also tried that.
I set up a docker container with the command:
sudo docker run --name db1 -p7474:7474 -p7687:7687 -d -v /db1/data:/data -v /db1/logs:/logs -v /db1/conf:/conf --env NEO4J_AUTH=none neo4j
Here I have an existing /db1/data/databases/graph.db folder. When I go to localhost:7474, it is fine: it shows me the existing database.
I set up another docker container with the command:
sudo docker run --name db2 -p3001:7474 -p3002:7473 -p3003:7687 -d -v /db2/data:/data -v /db2/logs:/logs -v /db2/conf:/conf --env NEO4J_AUTH=none neo4j
Here I expect to see an EMPTY database, but I see the already existing database again. When I go to the data folder inside /db2, I see that it has created some files there. WHY do I see the same database?
Also note that when I open the two databases, the headers of the web pages show they are using the same bolt port.
Can I copy the neo4j image and use different images to generate the containers? Would that help?
I realized that multiple databases are running and active, but somehow I'm not able to reach the second one through a browser.
Considering the docker commands:
cmd1: sudo docker run --name db1 -p7474:7474 -p7687:7687 -d -v /db1/data:/data -v /db1/logs:/logs -v /db1/conf:/conf --env NEO4J_AUTH=none neo4j
cmd2: sudo docker run --name db2 -p3001:7474 -p3002:7473 -p3003:7687 -d -v /db2/data:/data -v /db2/logs:/logs -v /db2/conf:/conf --env NEO4J_AUTH=none neo4j
For the db1 instance, the default container ports are exposed as the same host ports, whereas for the db2 instance the 3xxx series has been used.
To browse the UI provided by neo4j, you can use either host port 7474 or 3001; both are mapped to container port 7474.
The Neo4j browser uses defaults (from neo4j.conf) to connect to the neo4j server. The default is
bolt://<machineip>:7687, and the db1 instance has already exposed that container port on host port 7687.
So a running instance is found on port 7687, and a WebSocket connection is initiated to it for both db1 and db2, which is why both browsers show the same database.
How to connect to the appropriate instance?
1. Use :server disconnect and then :server connect with the appropriate bolt://<machineip>:port connection string.
2. Map the db1 instance's bolt container port to a different host port (i.e., other than 7687), so that no default is reachable.
3. (Preferred) Use the same hostport:containerport combination, e.g.
cmd2: sudo docker run --name db2 -p3001:7474 -p3002:7473 -p3003:3003 -d -v /db2/data:/data -v /db2/logs:/logs -v /db2/conf:/conf --env NEO4J_AUTH=none neo4j
In this case, a volume has to be mapped to provide a neo4j.conf with the updated value dbms.connector.bolt.listen_address=:3003.
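As an alternative to mounting a conf volume, the official neo4j image can also pick this setting up from an environment variable (dots in the setting name become underscores, and literal underscores are doubled); a sketch under that assumption:
sudo docker run --name db2 -p3001:7474 -p3002:7473 -p3003:3003 -d -v /db2/data:/data -v /db2/logs:/logs --env NEO4J_AUTH=none --env NEO4J_dbms_connector_bolt_listen__address=:3003 neo4j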
In case anybody still needs it: here is how to run two neo4j databases, neo4j_01 and neo4j_02, in two different docker containers, saving the data in different directories and accessing them on different ports.
docker container 1: neo4j_01
docker run \
--name neo4j_01 \
-p1474:7474 -p1687:7687 \
-d \
-v $HOME/neo4j_01/neo4j/data:/data \
-v $HOME/neo4j_01/neo4j/logs:/logs \
-v $HOME/neo4j_01/neo4j/import:/var/lib/neo4j/import \
-v $HOME/neo4j_01/neo4j/plugins:/plugins \
--env NEO4J_AUTH=username/enterpasswordhere \
neo4j:latest
docker container 2: neo4j_02
docker run \
--name neo4j_02 \
-p2474:7474 -p2687:7687 \
-d \
-v $HOME/neo4j_02/neo4j/data:/data \
-v $HOME/neo4j_02/neo4j/logs:/logs \
-v $HOME/neo4j_02/neo4j/import:/var/lib/neo4j/import \
-v $HOME/neo4j_02/neo4j/plugins:/plugins \
--env NEO4J_AUTH=username/enterpasswordhere \
neo4j:latest
After executing the code above, neo4j_01, for example, can be reached on port 1474 (when logging in, you need to change the bolt port to 1687 in the first line and then enter the username and password in the second and third lines).
You can stop a container with docker kill neo4j_01 and restart it with docker start neo4j_01. Data will still be there. It is saved in $HOME/neo4j_01/neo4j/data.
Doing it like this, I did not encounter any problems with ports or with accessing the wrong database.
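A quick way to double-check that each container owns the ports you expect:
docker ps --format '{{.Names}} -> {{.Ports}}'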
After a lot of effort, my solution is not to use docker.
Go and download the community server from here: https://neo4j.com/download-center/#community. It will give you a compressed file. Extract it. You will have a folder named something like neo4j-community-3.5.14. Make a copy of THAT FOLDER: one copy for each server instance.
Inside the folder there is a conf directory with a file named neo4j.conf. Open that file. By changing some settings in this file, you can run many neo4j servers. Change the settings below.
To accept non-local connections, uncomment this line:
dbms.connectors.default_listen_address=0.0.0.0
Change the port numbers so that they won't collide with ones already in use:
dbms.connector.bolt.listen_address=:3003
dbms.connector.https.listen_address=:3002
dbms.connector.http.listen_address=:3001
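Putting it together, a short sketch of the whole flow (the folder names and version number are just examples):
cp -r neo4j-community-3.5.14 neo4j-instance-2
# edit neo4j-instance-2/conf/neo4j.conf with the port settings above, then:
neo4j-community-3.5.14/bin/neo4j start
neo4j-instance-2/bin/neo4j start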
I read the blog: https://www.rubix.nl/blogs/tibco-monitoring-docker-how-create-instantiate-and-start-tibco-businessworks-container-edition
The blog entry is very interesting. Unfortunately, it does not work for me: my Tibco service does not connect to the monitoring.
Here is some data:
BWCE version: 2.3
BWCE Mon version: 2.4
Log entry from my Tibco service: Failed to register with Monitoring application - response code [400] and Reason Phrase [Bad Request]
Log entry from my bwce-mon:
INFO:{"host":"172.17.0.4","port":"8090","instanceName":"6866a20e7bd6","appName":"6866a20e7bd6"
WARN : Container is not running for (host, port):(172.17.0.4, 8090). Please register running container
Docker run command for my Tibco service: docker run -d -p 7575:7575 --link bwceadmin --name helloworld -e EMS_URL=tcp://ubdev-ws-003:7223 -e EMS_QUEUE=docker.queue -e BW_APP_MONITORING_CONFIG='{"url":"http://bwceadmin:8080"}' helloworld:1.0.0
Docker run command for bwce-mon: docker run -p 8080:8080 -e persistence_DB="dockerpostgres" -e DB_URL="postgres://postgres:#172.17.0.2:5432/postgres" -e PERSISTENCE_TYPE=postgres --name bwceadmin bwcemon:2.4.0
Do you have any idea why this did not work for me?
I didn't write the blog post, but I think your issue might be in the configuration of the property "BW_APP_MONITORING_CONFIG".
Can you check if you can access the URL http://bwceadmin:8080? If you can't, the issue is most likely with the configuration of that property.
To find the setting for that URL, you'll need to know the IP address of the container running your app:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <your container name>
After getting the IP address (like 10.100.22.1), you can start a new BWCE app and add a property for the monitoring URL:
BW_APP_MONITORING_CONFIG='{"url":"http://10.100.22.1:8080"}'
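A quick way to test that from the app's point of view, using the container names from the question (and assuming curl is available inside the helloworld image, which it may not be):
docker exec -it helloworld curl -v http://bwceadmin:8080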
I have set up a fabric network with 2 peers (with CouchDB), 1 orderer, and 1 CA. Now I want to run composer-playground in a docker container, and I'm trying to run it with the following command:
docker run --network composer_default --name composer-playground -v ~/.composer:/home/composer/.composer --publish 8080:8080 --detach hyperledger/composer-playground
It launches the container, and I can see the PeerAdmin card as well as my network admin card, but when I try to connect with the network admin card, it keeps connecting with the message "Please Wait: Connecting to Business Network avocado-network using connection profile hlfv1", and after some time it throws a REQUEST_TIMEOUT error.
Has anyone faced this issue? If yes, please enlighten me.
It's likely because your connection profile has 'localhost' definitions (and therefore the containers are not resolvable when you try to contact the other docker containers from inside your 'playground' container). I suggest looking at the sed sequence here -> hyperledger.github.io/composer/latest/tutorials/… (Step 9), which changes the connection.json (this assumes a 'dev' environment setup; use as appropriate for your env, etc.).
The following 'one-liner' does the job for the localhost-based Composer dev environment setup (in this case, my existing business network card is admin#trade-network, and the one-liner rewrites that card's connection.json in place):
sed -e 's/localhost:7051/peer0.org1.example.com:7051/' -e 's/localhost:7053/peer0.org1.example.com:7053/' -e 's/localhost:7054/ca.org1.example.com:7054/' -e 's/localhost:7050/orderer.example.com:7050/' < $HOME/.composer/cards/admin#trade-network/connection.json > /tmp/connection.json && cp -p /tmp/connection.json $HOME/.composer/cards/admin#trade-network/
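A quick sanity check after running the one-liner, to confirm no localhost entries are left in the card (same card name assumed):
grep localhost $HOME/.composer/cards/admin#trade-network/connection.json || echo "no localhost entries left"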