When I used ngrok.exe, I could set my webhook by calling
fetch("http://127.0.0.1:4040/api/tunnels")
to get this page.
I switched to the ngrok Docker image.
So I run ngrok in Docker using:
docker run --net=host -it -p 127.0.0.1:4040:4040 -e NGROK_AUTHTOKEN=<your-authtoken> ngrok/ngrok:latest http 7001
Output after running:
But I cannot access this page. What should I do to configure Docker? Please help.
I found the answer: change to --net=bridge and map 4040:4040, then use the URL localhost:4040/api/tunnels to access the data.
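A minimal sketch of the corrected command (the auth token is a placeholder):
docker run -it -p 4040:4040 -e NGROK_AUTHTOKEN=<your-authtoken> ngrok/ngrok:latest http 7001
# then, from the host:
curl http://localhost:4040/api/tunnels
Note that with bridge networking the tunnel target 7001 refers to the container's own localhost; to tunnel to an app running on the host machine, Docker Desktop users would likely need to point ngrok at host.docker.internal:7001 instead.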
I have an application that extracts the client IP making the request. When I run the application directly on a server, it works and I can get the IP.
But when I run it within a Docker container by executing this command:
docker run --rm -d -p 4300:4300 image
All of a sudden, the client IP being reported is 172.17.0.1.
Googling around, I see suggestions to pass --net=host, but doing this:
docker run --rm --net=host -p 4300:4300 image
now leads to the application not being reachable; it looks like the application is no longer available at the specified port.
It also does not work when I drop the -p 4300:4300, even though I got a message in the console that the flag is not needed when --net=host is used. That is:
docker run --rm --net=host image
Any suggestions on how to get this done, i.e. how to get the client IP from within a web service running in a Docker container? I am running Docker on a Mac; I don't know if this has anything to do with the problem.
If you use the --net=host configuration, you don't need the -p <host>:<container> setting, since that is only used to forward ports when you use the bridge network configuration.
Just drop this flag and browse to whatever port your application listens on. It should work.
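A quick sketch, assuming the app listens on port 4300:
docker run --rm -d --net=host image
curl http://localhost:4300/
# caveat: on Docker Desktop for Mac, --net=host attaches the container to
# the Linux VM's network stack, not to macOS itself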
I built a container with a React app in it.
I start the app: docker run -p 3000:3000 blarg3/node
I can bash into the container and curl localhost:3000, and it returns the front page of my site.
But when I go to the container's IP and port, http://172.17.0.2:3000/, nothing is returned.
By default "docker run" binds port only to local interface. If you want to bind it to another interface you need to specify it's IP address like this:
docker run -p 172.17.0.2:3000:3000 blarg3/node
You can read more about Docker networking options here: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/#connect-using-network-port-mapping
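To check what a published port is actually bound to on the host, docker port is handy (the container name here is illustrative):
docker port my-react-app 3000
# typically prints something like: 0.0.0.0:3000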
I installed Docker, and with the TensorFlow image I am unable to open the Jupyter notebook in my browser.
What am I missing?
command used: docker run -it -v /home/$USER_NAME/tf_files:/tf_files gcr.io/tensorflow/tensorflow
where "gcr.io/tensorflow/tensorflow" is the tensorflow image and "/home/surya" is $HOME.
(screenshots: output in the terminal, output in the browser)
PS: The Docker installation is correct, as "docker run hello-world" gives the expected message.
You forgot to bind the port. The official TensorFlow documentation provides the command with the required port exposed:
docker run -it -p 8888:8888 -v /home/surya/tf_files:/tf_files gcr.io/tensorflow/tensorflow
where -p 8888:8888 means: map port 8888 of my local machine to port 8888, where the service in the container listens. You can then access the service at http://localhost:8888.
Why do I have to map a port?
Your container shows the following:
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://localhost:8888/?token=1b3ec72ff1ed67f77a09beaee1dc4b9ad4e7aee26401b6f0
which means that you have to connect to the process running inside the container on port 8888. To make that container port accessible from your local machine, you have to add -p 8888:8888 to your command. Opening the URL given by the container then brings up the container's notebook in your local browser.
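If in doubt, you can confirm that the mapping is in place (the output shown is illustrative):
docker ps --format '{{.Names}} -> {{.Ports}}'
# e.g. tensorflow -> 0.0.0.0:8888->8888/tcp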
I am running the Cloudera Docker quickstart image (on Windows) as explained on this page.
I run it using:
docker run --hostname=quickstart.cloudera --privileged=true -t -i -p 7180:7180 -p 9080:9080 cloudera/quickstart:latest
It runs fine: I am able to run Cloudera Manager and access it using the URL http://192.168.99.100:7180. So far so good. I also run Tomcat with a simple app on localhost:9080 inside this same container. How do I access it from my host? I tried http://192.168.99.100:9080, but it does not work.
Update: I fixed it by binding the server to the VM IP, 192.168.99.100, instead of localhost. Now it works. Thanks.
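The general rule at work here: a service must listen on a non-loopback address inside the container for a published port to reach it. A quick check from inside the container (assuming netstat is available in the image):
netstat -tln | grep 9080
# 127.0.0.1:9080 -> reachable only inside the container
# 0.0.0.0:9080   -> reachable through the published port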
I have multiple Docker containers on a single machine. Each container runs a process and a web server that provides an API for the process.
My question is: how can I access the APIs from my browser when the default port is 80? To access the web server inside a Docker container, I do the following:
sudo docker run -p 80:80 -t -i <yourname>/<imagename>
This way I can do from my computers terminal:
curl http://hostIP:80/foobar
But how to handle this with multiple containers and multiple web servers?
You can either publish each container on a different host port, e.g.
docker run -p 8080:80 -t -i <yourname>/<imagename>
docker run -p 8081:80 -t -i <yourname1>/<imagename1>
or put a proxy (nginx, Apache, Varnish, etc.) in front of your API containers.
Update:
The easiest way to set up a proxy would be to link it to the API containers, e.g. with an Apache config containing
RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
you may run your containers like this:
docker run --name api1 <yourname>/<imagename>
docker run --name api2 <yourname1>/<imagename1>
docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
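For the RewriteRule proxying above to work, the proxy image also needs the relevant Apache modules enabled; on a Debian/Ubuntu-based image that would be roughly:
a2enmod rewrite proxy proxy_http
# module names assume a stock Apache layout; adjust for your base image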
This might be somewhat cumbersome, though, if you need to restart the API containers, as the proxy container would have to be restarted as well (links are fairly static in Docker). If this becomes a problem, you might look at approaches like fig (the predecessor of Docker Compose) or an auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The latter link also shows proxying with nginx.
Update II:
In more modern versions of Docker, it is possible to use a user-defined network instead of the links shown above, which avoids some of the inconveniences of the deprecated link mechanism.
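A sketch of that approach (the network name is illustrative; containers on a user-defined bridge network can resolve each other by name):
docker network create apinet
docker run -d --name api1 --network apinet <yourname>/<imagename>
docker run -d --name api2 --network apinet <yourname1>/<imagename1>
docker run -d --network apinet -p 80:80 <my_proxy_container>
# the proxy now reaches the APIs at http://api1/ and http://api2/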
Only a single process can be bound to a given port at a time, so running multiple containers means each will be exposed on a different host port number. Docker can allocate the ports automatically for you with the -P flag.
sudo docker run -P -t -i <yourname>/<imagename>
You can use the "docker port" and "docker inspect" commands to see the actual port number allocated to each container.