After installing Odoo's image using Docker on an Ubuntu server, I am unable to use Odoo on port 80 instead of 8069. I have tried multiple approaches without success, including:
Installing nginx on the server and using it as a proxy to redirect 8069 to 80
Editing the odoo.conf file and adding xmlrpc_port = 80 so it runs on port 80
Adding iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8069 to rc.local
Running Odoo at startup on port 80
Has anyone been able to figure this out?
If you're running Odoo inside a Docker container, you can just map port 80 on the host to port 8069 inside the container using the -p option:
$ docker run -d -p 80:8069 odoo:12.0
To verify the mapping, run netstat on the host:
$ sudo netstat -antop | grep LISTEN | grep 80
You should see something like this:
tcp6 0 0 :::80 :::* LISTEN 971/docker-proxy
If you still have problems, examine the port security settings (e.g. security groups on AWS).
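If you manage the container with docker-compose instead, the same mapping would look roughly like the sketch below. This only covers the port-publishing part (a full Odoo stack also needs a PostgreSQL service), and the service name is an assumption:
services:
  odoo:
    image: odoo:12.0
    ports:
      - "80:8069"   # host port 80 -> Odoo's default 8069 inside the container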
Related
I'm trying to run an application in Docker on an EC2 instance. It consists of two separate processes. I'm able to access the ports for process 1, but not process 2.
Process 1 listens on the following ports:
2008
8080
Process 2 listens on these ports:
2021
8084
The security rules allow all traffic to all ports from all origins.
netstat shows that both ports for process 2 are listening:
netstat -an | grep 2021
tcp6 0 0 :::2021 :::* LISTEN
netstat -an | grep 8084
tcp6 0 0 :::8084 :::* LISTEN
The docker run command publishes all of the above ports:
docker run -ti --privileged=true -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 8080:8080 -p 2008:2008 -p 8084:8084 -p 2021:2021 myname/image_name /usr/sbin/init
There is no firewall process running.
Yet a Zenmap scan shows that only ports 2008 and 8080 of the above four are listening; 2021 and 8084 don't show up.
Any ideas why this would be? I can't think of what else to look for.
I'm trying to follow the beginner tutorial at training.play-with-docker.com. At Task 2, step 6, I run the following and get the error below:
PS C:\Users\david.zemens\Source\Repos\linux_tweet_app> docker container run --detach --publish 80:80 --name linux_tweet_app $DOCKERID/linux_tweet_app:1.0
d39667ed1deafc382890f312507ae535c3ab2804907d4ae495caaed1f9c2b2e1
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint linux_tweet_app (a819223be5469f4e727daefaff3e82eb68eb0674e4a46ee1a32e703ce4bd384d): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
I am using Docker Desktop on a Win10 machine locally. I've tried resetting Docker as suggested here. The error persists. Since something else must be using port 80, I should be able to avoid the error by using a different port, right?
PS C:\Users\david.zemens\Source\Repos\linux_tweet_app> docker container run --detach --publish 1337:1337 --name linux_tweet_app $DOCKERID/linux_tweet_app:1.0
Right! docker ps now confirms the container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b700df12c2d1 dzemens/linux_tweet_app:1.0 "nginx -g 'daemon of…" About a minute ago Up About a minute 80/tcp, 443/tcp, 0.0.0.0:1337->1337/tcp linux_tweet_app
But when I try to view the webpage that the tutorial sends me to, I get an error in the browser.
I'm not sure how the link is dynamically generated but it looks something like this:
http://ip172-18-0-32-blsfgt2d7o0g00epuqi0-80.direct.labs.play-with-docker.com/
Browser error as below:
The proxy could not connect to the destination in time.
URL: http://ip172-18-0-32-blsfgt2d7o0g00epuqi0-80.direct.labs.play-with-docker.com/
Failure Description: :errno: 104 - 'Connection reset by peer' on socketfd -1:server state 7:state 9:Application response 502 cannotconnect
Another highly-upvoted answer suggests I need to "disable Windows 10 fast startup". I have not tried this yet, mainly because I'm not sure what the full repercussions of that setting are.
Is there something stupidly obvious that I'm overlooking here? Shouldn't I be able to run this on different ports? If not, why not? If I have to use 80:80 but System is already using that port, won't I have further problems if I try to kill that PID?
PS C:\Users\david.zemens\Source\Repos\linux_tweet_app> netstat -a -n -o | findstr :80 | findstr LISTENING
TCP 0.0.0.0:80 0.0.0.0:0 LISTENING 4
TCP 0.0.0.0:8003 0.0.0.0:0 LISTENING 4
TCP 0.0.0.0:8080 0.0.0.0:0 LISTENING 1348
TCP 0.0.0.0:8081 0.0.0.0:0 LISTENING 4688
TCP 127.0.0.1:8080 0.0.0.0:0 LISTENING 2016
TCP 127.0.0.1:8082 0.0.0.0:0 LISTENING 28536
TCP [::]:80 [::]:0 LISTENING 4
TCP [::]:8003 [::]:0 LISTENING 4
TCP [::]:8080 [::]:0 LISTENING 1348
TCP [::]:8081 [::]:0 LISTENING 4688
I made a small change in the Dockerfile, changing EXPOSE 80 443 to EXPOSE 1337 443, and I'm now able to view my app by navigating to localhost:1337 in my browser. I think that will get me through the next steps in the training module, but I'm still curious whether I'm doing something wrong.
This seems to work regardless of the change in Dockerfile (I've removed and republished after changing Dockerfile).
PS C:\Users\david.zemens\Source\Repos\linux_tweet_app> docker container run --detach --publish 1337:80 --name linux_tweet_app $DOCKERID/linux_tweet_app:1.0
Try this:
> net stop winnat
> docker start ...
> net start winnat
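If you want to see why the bind failed in the first place, Windows keeps a list of TCP port ranges reserved by winnat/Hyper-V, and a range covering port 80 would explain the error. This is a standard netsh query, shown here only as a diagnostic suggestion:
> netsh interface ipv4 show excludedportrange protocol=tcp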
Part of the problem is that you're using the wrong mapping. The application uses port 80, but you're mapping host port 1337 to container port 1337.
The correct command should be:
PS C:\Users\david.zemens\Source\Repos\linux_tweet_app> docker container run --detach --publish 1337:80 --name linux_tweet_app $DOCKERID/linux_tweet_app:1.0
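After re-running with the corrected mapping, you can confirm what Docker actually published; docker port is a standard subcommand, and the container name is taken from the command above. It should report something like 80/tcp -> 0.0.0.0:1337, i.e. host port 1337 forwarding to container port 80:
PS C:\Users\david.zemens\Source\Repos\linux_tweet_app> docker port linux_tweet_app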
It may be because IIS or some other server is already running on port 80.
Try stopping IIS and it should work.
Reference: https://forums.docker.com/t/error-starting-userland-proxy-listen-tcp-0-0-0-0-bind-an-attempt-was-made-to-access-a-socket-in-a-way-forbidden-by-its-access-permissions/81299/7
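To see which process is actually holding port 80 before stopping anything, one option on Windows is the following PowerShell check (only a sketch; in the netstat output above the owning PID is 4, which is the System process and usually means the built-in HTTP service or IIS):
PS> Get-NetTCPConnection -LocalPort 80 -State Listen | Select-Object LocalAddress, OwningProcess
PS> Get-Process -Id 4    # PID taken from the output above; verify it on your machine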
I'm trying to run
/usr/bin/docker run --rm -v /var/data/redis:/data -v /var/data/conf/redis.conf:/usr/local/etc/redis/redis.conf --name redis -p 6379:6379 redis:5.0.3-alpine3.9
but I get:
/usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint redis (f16f19b7727a710fb6c96be566dac66ce26282982960d97faa28861c24fcf2fb): Bind for 0.0.0.0:6379 failed: port is already allocated.
When I try to check the ports used with netstat, I get:
[root@artik ~]# netstat -nlpute | grep 6379
tcp6 0 0 :::6379 :::* LISTEN 0 14384 2471/docker-proxy
I have no Docker containers running right now.
I don't understand this issue. What should I do?
[root@artik ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Steps I had to take to get everything working:
sudo service docker stop
sudo rm /var/lib/docker/network/files/local-kv.db
sudo service docker start
docker system prune
And then try again.
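To confirm the stale proxy is really gone before retrying, the same netstat check from the question can be reused; it should now return nothing:
$ sudo netstat -nlpute | grep 6379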
From your netstat output it's clear that there is one process holding port 6379:
[root@artik ~]# netstat -nlpute | grep 6379
tcp6 0 0 :::6379 :::* LISTEN 0 14384 2471/docker-proxy
docker-proxy processes are created when you publish ports with docker run, which is true in your case (-p 6379:6379).
For more info on docker-proxy, check this out.
I suspect that you earlier ran a Redis container using port 6379, but that container was not properly removed, which kept its docker-proxy process running, and hence you got the "port is already allocated" error.
Hope this helps.
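If you want to check for such a leftover container, list all containers (including stopped ones) and look for anything that ever published 6379. If nothing shows up but the docker-proxy process from the netstat output is still there, killing that PID or restarting the Docker daemon should release the port. This is only a suggestion, so double-check the PID on your machine first:
$ docker ps -a | grep 6379    # any old container still publishing the port?
$ sudo kill 2471              # PID of the stale docker-proxy from the netstat output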
As DannyMoshe suggested, for anyone else:
Try this before you potentially mess up your whole setup:
sudo service docker stop
sudo service docker start
Remove the ports: - ... mapping in the docker-compose file and let Docker assign the port by itself, or change the host side of the mapping from 6379:6379 to 6378:6379; that worked for me. Before doing this you may need to remove already-created containers: docker rm -f $(docker ps -a -q). See the sketch below for the second option.
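A minimal docker-compose sketch of that remapping; the service name and image tag are assumptions taken from the question:
services:
  redis:
    image: redis:5.0.3-alpine3.9
    ports:
      - "6378:6379"   # host port 6378 -> container port 6379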
Docker provides a way to map ports between the container and host.
As per the official documentation, it's also possible to specify a host IP while mapping a port:
-p 192.168.1.100:8080:80 - Map TCP port 80 in the container to port 8080 on the Docker host for connections to host IP 192.168.1.100.
I tried this option to figure out the difference with and without the host IP.
Using just -p 80:80
$ docker run -itd -p 80:80 nginx:alpine
$ curl localhost:80
$ curl 127.0.0.1:80
$ curl 0.0.0.0:80
$ curl 192.168.0.13:80
$ ps -ef | grep docker-proxy
16723 root 0:00 /usr/local/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.17.0.1 -container-port 80
$
All the curl commands return output.
Using host-ip like -p 192.168.0.13:80:80
$ docker run -itd -p 192.168.0.13:80:80 nginx:alpine
$ curl localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
$ curl 127.0.0.1:80
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
$ curl 0.0.0.0:80
curl: (7) Failed to connect to 0.0.0.0 port 80: Connection refused
$ curl 192.168.0.13:80 # return output
$ ps -ef | grep docker-proxy
4914 root 0:00 /usr/local/bin/docker-proxy -proto tcp -host-ip 192.168.0.13 -host-port 80 -container-ip 172.17.0.2 -container-port 80
$
All the curl commands failed except 192.168.0.13:80.
Is there any other difference apart from the one I mentioned here?
Wondering when to use host-ip based port mapping. Any use cases?
A Docker host may have multiple NICs. In the data center, this may be to segregate traffic, e.g. management, storage, and application/public. On your laptop, this may be for the wireless and wired interfaces. There are also virtual NICs for things like loopback (127.0.0.1) and VPN tunnels.
When you do not specify an IP in the port publish command, by default Docker will bind to all interfaces on the host. In IPv4, this is commonly notated as 0.0.0.0, which means listen on any interface (and this is why I don't connect to this address: there's no such thing as connecting to "any IP"). With an IP address specified, you manually select which interface to use. Why would you want to specify this? Several reasons I can think of:
Listening on only 127.0.0.1 to prevent external access
Listening on 0.0.0.0 to explicitly bind to all IPv4 interfaces (it is possible to change docker's default behavior, so this could be necessary for some).
Listening on one physical NIC, allowing other NICs to be bound by other services on the same port.
Listening on only IPv4 interfaces if the app does not work for IPv6.
While there are lots of possible reasons, other than listening on loopback for security these use cases are very rare, and most users leave Docker listening on all interfaces.
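As a concrete example of the loopback case, reusing the nginx test from the question (the host port 8080 and the external IP are only illustrative):
$ docker run -itd -p 127.0.0.1:8080:80 nginx:alpine
$ curl 127.0.0.1:8080      # works: published on the loopback interface only
$ curl 192.168.0.13:8080   # refused: nothing is listening on the external interface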
I have a Docker image in which I start jstatd -p 1099 and then my Java app. I also expose port 1099 in the Dockerfile.
I have deployed this docker image to AWS ElasticBeanstalk and I can see from the EB logs that the port is exposed.
/var/log/docker-ps.log
-------------------------------------
'docker ps' ran at Fri Jun 17 04:23:02 UTC 2016:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3199a65e216 8b9c53bb10b6 "/app/run.sh" 5 minutes ago Up 5 minutes 1099/tcp, 8080/tcp jolly_carson
I would now like to profile the app using VisualVM. How can I find the correct IP to connect to? Attempts to telnet to the app's domain name on port 1099 time out.
The container's port is not bound to the instance's port, which is good because you don't want to expose your debugging interface publicly. The IP address of the container can be found with:
$ sudo docker ps
$ sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container_id>
Start an SSH tunnel that tunnels from port 5005 locally to that IP address and port 5005 on the box.
$ ssh ec2-user@ec2-54-204-111-222.compute-1.amazonaws.com -L 5005:<ip>:5005 -N
Or you can configure port forwarding with iptables:
$ sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' stupefied_swartz
172.17.0.2
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 5005 -j DNAT --to-destination 172.17.0.2:5005
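Note that forwarding to the container's IP needs the DNAT target; REDIRECT only accepts --to-ports and redirects to the local machine. To verify the rule was added (and to remove it later by line number), a standard listing command works:
$ sudo iptables -t nat -L PREROUTING -n --line-numbers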