I have pulled the hyperledger/composer-rest-server Docker image. If I want to run this image, which ports should I publish? For example:
docker run --name composer-rest-server --publish XXXX:YYYY --detach hyperledger/composer-rest-server
What should I replace XXXX and YYYY with here?
I run the REST server in a container using a command like this:
docker run -d \
-e COMPOSER_CARD="admin@test-net" \
-e COMPOSER_NAMESPACES="never" \
-v ~/.composer:/home/composer/.composer \
--name rest -p 3000:3000 \
hyperledger/composer-rest-server
For the published port, the first number is the port that will be used on the Docker host, and the second is the port it is forwarded to inside the container. (The port inside the container will always be 3000 by default and is more complex to change.)
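For example, if port 3000 is already taken on the Docker host, a sketch of the same command published on host port 8080 instead (the container side stays at the default 3000) would be:
# Publish the REST server on host port 8080; inside the container it still listens on 3000
docker run -d \
-e COMPOSER_CARD="admin@test-net" \
-e COMPOSER_NAMESPACES="never" \
-v ~/.composer:/home/composer/.composer \
--name rest -p 8080:3000 \
hyperledger/composer-rest-server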
I'm passing two environment variables into the container which the REST server will recognise: COMPOSER_NAMESPACES just keeps the endpoints simple, but COMPOSER_CARD is essential for the REST server to start properly.
I'm also sharing a volume between the Docker Host and the Container which is where the Cards are stored, so that the REST server can find the COMPOSER_CARD referred to in the environment variable.
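You can check that the card is actually present in that shared folder on the Docker host before starting the container (a quick sanity check; ~/.composer/cards is where the Composer CLI stores cards by default, so adjust the path if your setup differs):
# List the business network cards stored on the Docker host
ls ~/.composer/cards
# You should see a directory named after the card, e.g. admin@test-net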
Warning: If you are trying to test the REST server with the Development Fabric you need to understand the IP network and addressing of the Docker containers - by default the Composer Business Network Cards will be built using localhost as the address of the Fabric servers, but you can't use localhost in the REST server container, as that will resolve to the container itself and fail to find the Fabric.
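One common approach (a sketch only; the network name composer_default and the peer container names such as peer0.org1.example.com are assumptions based on the default development Fabric, so check yours first) is to attach the REST server container to the same Docker network as the Fabric containers, and use a card whose connection profile refers to the Fabric container names rather than localhost:
# Find the network and container names the development Fabric is using
docker network ls
docker ps --format '{{.Names}}'
# Attach the running REST server container to that network (network name assumed)
docker network connect composer_default rest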
There is a tutorial in the Composer Docs that is focused on Multi-User authentication, but it does also cover the networking aspects of using the REST Server Container. There is general information about the REST server here.
I'm a beginner in the Docker world, and just as it was painful to set up all the 'localhost' stuff with Apache, it's the same with Docker.
I don't know if it's just me, but I tried to solve my problem with the help of other posts, and after several hours I gave up and am asking for your help, because some posts are just not comprehensible to me (posts that involve bridges, NAT, iptables, docker-machine, etc.).
After several hours I'm simply trying to access an Apache website on localhost:5000 on Windows. Apache is launched with service apache2 start inside a Docker container, and if I run w3m localhost inside that container I can see it running.
But when I try to access it with a browser, there is no response.
I also tried this command:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' bce97a49b68c
172.17.0.2
The address with :5000 isn't accessible either; I even put it in the hosts file. No success.
If someone has a definitive solution for this problem, please share it; there seem to be plenty, and everything looks so simple in blog articles (I even tried something with docker-compose; it broke Docker and I had to reinstall the whole thing).
I'm a little unsure what you're asking, but it seems like you may need to expose your ports. When running something in Docker, it runs in its own little box unconnected to the outside world - the rest of your machine. If you want to connect ports - say, to access a web server running inside a Docker container - you need to use the -p or --publish option when running your Docker container. There are similar options for mounting volumes and such.
Here's an example from the database I run locally in Docker:
docker run \
--publish=7474:7474 \
--volume=/home/me/logs:/logs \
--env=NEO4J_AUTH=none \
neo4j:4.2
This says:
Allow the outside system to reach port 7474 inside the Docker container via port 7474 on the Docker host
Mount the outside system's /home/me/logs folder as /logs inside the Docker container
Set the environment variable NEO4J_AUTH inside the Docker container to the value none
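Applied to your case, a sketch would be (my-apache-image is a placeholder for whatever image you are running, and Apache is assumed to listen on port 80 inside the container):
# Publish container port 80 (Apache) on host port 5000, so the site is reachable at http://localhost:5000
docker run -d --publish=5000:80 my-apache-image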
I tried deploying KIE Workbench using the Docker command docker run -p 8080:8080 -p 8001:8001 -d --name drools-wb jboss/business-central-workbench-showcase:latest and the KIE server using the Docker command docker run -p 8180:8080 -d --name kie-server --link drools-wb:kie-wb jboss/kie-server-showcase:latest. I deployed a sample drl file to the KIE server using Business Central. The screen image after deployment is as shown below.
The remote server is given as 172.17.0.3:8080. But when I try to test the deployed file using Postman, the server is not responding; the requests are timing out. The two endpoint services I tried to access are http://172.17.0.3:8080/kie-server/services/rest/server/ and http://172.17.0.3:8080/kie-server/services/rest/server/DemoRule_1.0.0-SNAPSHOT. First of all, I am not understanding why it is getting deployed on some remote server and not localhost. Secondly, why is it not accessible? I even tried the KIE server container endpoint http://localhost:8180/kie-server/services/rest/server/. But none of this works. Can someone help me understand the problem?
I found the answer myself. The service was available at http://localhost:8180/kie-server/services/rest/server/containers/instances/DemoRule_1.0.0-SNAPSHOT. That's where the actual controller was available. Port 8080 was the endpoint for the WildFly server. The IP 172.17.0.3:8080 was the Docker container's internal address; it had nothing to do with the controllers.
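For example, you can list the deployed containers through the port published on the Docker host (a sketch; the kieserver/kieserver1! credentials are, as far as I know, the showcase image defaults, so adjust them to your setup):
# Query the KIE server REST API through the published host port
curl -u kieserver:kieserver1! \
-H "Accept: application/json" \
http://localhost:8180/kie-server/services/rest/server/containers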
I'm trying to stand up a temporary NiFi server to support a proof of concept demo for a customer. For these types of short-lived servers I like to use Docker when possible. I'm able to get the NiFi container up and running without any issues, but I can't figure out how to access its UI from the browser on a remote host. I've tried the following docker run variations:
docker run --name nifi \
-p 8080:8080 \
-d \
apache/nifi:latest
docker run --name nifi \
-p 8080:8080 \
-e NIFI_WEB_HTTP_PORT='8080' \
-d \
apache/nifi:latest
docker run --name nifi \
-p 8080:8080 \
-e NIFI_WEB_HTTP_HOST=${hostname-here} \
-e NIFI_WEB_HTTP_PORT='8080' \
-d \
apache/nifi:latest
My NiFi version is 1.8.0. I'm fairly certain that my problems are related to the host-headers blocker feature added to version 1.5.0. I've seen a few questions similar to mine but no solutions.
Is it possible to access the NiFi UI from a remote host after version 1.5.0?
Can host-headers blocker be disabled for a non-prod demo?
Would a non-Docker install on my server present the same host-headers blocker issues?
Should I use 1.4 for my demo and save myself a headache?
While there was a bug around 1.5.0 surrounding the host headers in Docker, that issue was resolved, and additionally the host header check is now only enforced for secured environments (you will see a note about this in the logs on container startup).
The commands you provide in your question are all workable for accessing NiFi on the associated mapped port in each example, and I have verified this in 1.6.0, 1.7.0, and 1.8.0. You may want to evaluate the network security settings of your remote machine in question (cloud-provided instances, for example, will typically require explicit security groups exposing ports).
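A quick way to rule out the container itself (a sketch, assuming the first docker run variation above with the container named nifi) is to check the port mapping and fetch the UI from the Docker host before trying a remote browser:
# Show the host port Docker mapped for the container
docker port nifi
# Fetch the UI locally; an HTTP response here means NiFi itself is fine and the problem is network access
curl -v http://localhost:8080/nifi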
I had the same issue; I was not able to access the web UI remotely. It turned out to be a firewall issue. Disabling firewalld, or adding a custom firewall rule to allow the Docker network and port, should solve the issue.
The docker-compose.yml is shared here
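If you would rather add a rule than disable the firewall entirely, something like this should work on a firewalld host (port 8080 is assumed to match the published NiFi port above):
# Open the published NiFi port in firewalld and reload the rules
sudo firewall-cmd --permanent --zone=public --add-port=8080/tcp
sudo firewall-cmd --reload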
In short: can I run Elasticsearch and a Dropwizard app in separate Docker containers and allow them to see each other?
I am running Elasticsearch 6.2.2 from Docker (on Mac), using the command:
docker run -p 9200:9200 -p 9300:9300 -e "network.host=0.0.0.0" \
-e "http.port=9200" -e "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:6.2.2
I can access Elasticsearch (POST & GET) fine using Postman directly on the Mac, e.g.
localhost:9200/testindex/_search
However, when running a Dropwizard application from a different Docker image which accesses the Docker Elasticsearch instance, I get connection refused using the same host and port (localhost:9200).
I have no problems at all when running the Dropwizard app directly from an IDE, only when it's running from a Docker image and accessing ES from a different image.
docker run -p 8080:8080 -p 8081:8081 testapp
Has anyone else had similar issues or solved this in the past?
I'm assuming it's network-related and that connecting to localhost from one Docker container will not map to the other Docker container.
The issue you are facing is in the URL you pass to the Dropwizard container. As a container by default has its own networking, a value of localhost means the Dropwizard container itself, not what you see as your local host from outside the container.
Please have a look at Docker networking and how you can link two containers by name. I would suggest checking out docker-compose for multi-container setups on a local machine.
What would also work (but is not good practice) is to pass the Dropwizard container the IP of your machine as the Elasticsearch host, because you created a port mapping from your host into the Elasticsearch container. But it's better to have a look at Compose and do it as it is supposed to be done.
For details how to use compose please have a look at this answer with a similar example.
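A minimal docker-compose.yml sketch along those lines (the service name elasticsearch becomes the hostname the Dropwizard app should use, so its config would point at http://elasticsearch:9200 instead of localhost; testapp is the image name from the question):
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.2
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  app:
    image: testapp
    ports:
      - "8080:8080"
      - "8081:8081"
    depends_on:
      - elasticsearch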
I have multiple docker containers on a single machine. On each container is running a process and a web server that provides an API for the process.
My question is, how can I access the API from my browser when the default port is 80? To be able to access the web server inside a Docker container I do the following:
sudo docker run -p 80:80 -t -i <yourname>/<imagename>
This way I can do from my computers terminal:
curl http://hostIP:80/foobar
But how to handle this with multiple containers and multiple web servers?
You can either expose multiple ports, e.g.
docker run -p 8080:80 -t -i <yourname>/<imagename>
docker run -p 8081:80 -t -i <yourname1>/<imagename1>
or put a proxy (nginx, apache, varnish, etc.) in front of your API containers.
Update:
The easiest way to do a proxy would be to link it to the API containers, e.g. having apache config
RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
you may run your containers like this:
docker run --name api1 <yourname>/<imagename>
docker run --name api2 <yourname1>/<imagename1>
docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
This might be somewhat cumbersome, though, if you need to restart the API containers, as the proxy container would have to be restarted as well (links are fairly static in Docker at the moment). If this becomes a problem, you might look at approaches like Fig or an auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The latter link also shows proxying with nginx.
Update II:
In more modern versions of Docker it is possible to use a user-defined network instead of the links shown above, to overcome some of the inconveniences of the deprecated link mechanism.
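A sketch of the same setup using a user-defined bridge network (the network name apinet is a placeholder; containers on the same user-defined network can reach each other by container name, so the proxy config above still resolves api1 and api2):
# Create a user-defined bridge network and attach every container to it
docker network create apinet
docker run -d --network apinet --name api1 <yourname>/<imagename>
docker run -d --network apinet --name api2 <yourname1>/<imagename1>
docker run -d --network apinet -p 80:80 <my_proxy_container>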
Only a single process can be bound to a given host port at a time, so running multiple containers means each will be exposed on a different port number. Docker can assign these ports automatically for you by using the "-P" option.
sudo docker run -P -t -i <yourname>/<imagename>
You can use the "docker port" and "docker inspect" commands to see the actual port number allocated to each container.
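For example (a sketch; the container name is arbitrary and the host ports Docker picks will differ on your machine):
# Let Docker pick a free host port for every exposed container port
sudo docker run -P -t -i --name api1 <yourname>/<imagename>
# Show which host port was mapped to each exposed port
docker port api1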