I would like to know how to shut down the nginx server that appears on my port 8081.
I used this command:
docker run -d --name server -p 8081:80 nginx
However, I have already stopped and removed the nginx container with Docker and deleted the image, but port 8081 is still busy with the server.
If you really don't have any Docker container running, you might have nginx installed directly on your system. Use one of the proposed ways here to identify the process running on port 8081. You can then decide on further steps, such as killing the process by its PID.
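For example (assuming lsof and ss are available on the host; either one is enough), you could check what is bound to that port with:

sudo lsof -i :8081
sudo ss -tlnp | grep 8081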
I did that, and apparently I don't have any process listening on port 8081.
I have a Docker image with a Ruby on Rails environment built in (i.e. it installs some Rails gems and system packages), and I have an EXPOSE 3000 at the end to expose the port.
I ran a container with docker run -p 3000:3000 -ti <image> bash, then started the Rails server. The logs say the web server is available on localhost:3000. I tried to connect to both the IP address reported by docker inspect <id> and localhost on my host machine, but neither would connect. What could be the problem here?
If your application is listening on localhost, it will only respond to requests from the container's localhost - that is, other processes inside the container.
To fix this you need to set the listen address of your server to listen to any address (usually, you specify this as 0.0.0.0). I've never used rails, but from a quick search, you should use the -b option.
So changing your ENTRYPOINT or CMD in your Dockerfile to contain -b 0.0.0.0 would probably do it.
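For illustration, a minimal sketch of such a CMD (assuming the image already has the Rails app in its working directory and a standard bin/rails binstub):

CMD ["bin/rails", "server", "-b", "0.0.0.0", "-p", "3000"]

Alternatively, since you start the server manually from a bash session, you could simply run rails server -b 0.0.0.0 inside the container.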
I want to adapt my Redis settings via a custom conf file and followed the documentation for the implementation. Running my container with the following command throws no error - so far so good.
docker run --name redis-container --net redis -v .../redis:/etc/redis -d redis redis-server /etc/redis/redis.conf
To check whether my config file is read, I switched the default port 6379 to port 6380, but looking at my Docker ports via docker ps still shows the default 6379 as my port.
Is there a difference between the Redis port itself and the container port, or where is my problem located?
The standard Redis image Dockerfile contains the line
EXPOSE 6379
Once a port has been exposed this way, there is no way to un-expose it. Exposing a port has fairly few practical effects in modern Docker, but the most visible one is that each exposed port shows up in the docker ps output (here, 6379/tcp) even if it's not separately published (docker run -p). There's no way to remove this port number from the docker ps output.
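For illustration (container ID and name are made up, other columns trimmed), an exposed-but-unpublished port shows up in docker ps like this:

CONTAINER ID   IMAGE   ...   PORTS      NAMES
f3a1b2c3d4e5   redis   ...   6379/tcp   redis-container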
Docker's port system (the EXPOSE directive and the docker run -p option) are a little bit disconnected from what the application inside the container is actually doing. In your case the container is configured to expose port 6379 but the process is actually listening on port 6380; Docker has no way of knowing these don't match up. Changing the application configuration won't change the container configuration, and vice versa.
As a practical matter you don't usually need to change application ports. Since this Redis will be the only thing running in its container and its corresponding isolated network namespace, it can't conflict with other Redises on the host or in other containers. If you need to remap it on the host, you can use different port numbers for -p; the second number must match what the process is listening on (and Docker can't auto-detect or check this) but the first can be any port.
docker run -p 6380:6379 ... redis
If you're trying to check whether your configuration has had an effect, running CONFIG GET via redis-cli could be a more direct way to ask what the server's configuration is.
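For example (using the container name from your docker run command, and assuming the config really switched the listen port to 6380):

docker exec -it redis-container redis-cli -p 6380 CONFIG GET port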
I have a container running HAProxy version 2.0 locally on Docker port 3001.
Config file is:
global
    debug

defaults
    log global
    mode http
    timeout connect 50000
    timeout client 50000
    timeout server 50000

frontend main
    bind *:3000
    default_backend app

backend app
    balance leastconn
    mode http
    server dummy <localhostIP>:80
Dockerfile is:
FROM haproxy:2.0
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Docker Run command:
docker run -p3001 --name my-running-haproxy my-haproxy
I am issuing a Postman GET to port 3000 and expecting HAProxy to proxy the request to my server "dummy" on local port 80. But I am not able to get any legible response back. Appreciate any inputs.
If you run the container like you did, Docker will assign a random port on your localhost and route traffic from it to container port 3001. You can check which port that is by running docker ps after you started the container and looking at the PORTS section:
CONTAINER ID   IMAGE        COMMAND                  CREATED          STATUS          PORTS                     NAMES
6b502af649be   my-haproxy   "/docker-entrypoint.…"   48 minutes ago   Up 47 minutes   0.0.0.0:32769->3001/tcp   upbeat_shtern
So in my example, you can access your application on port 32769, but this number is random.
Keep in mind that in your example, Docker routes traffic to port 3001, whereas you configured your HAProxy to bind to port 3000. You would at least need to change the docker run command to the following:
docker run -p3000 --name my-running-haproxy my-haproxy
But usually you want to have a fixed port on localhost, e.g. port 80. Start your container like this to achieve that:
docker run -p 80:3000 --name my-running-haproxy my-haproxy
Now you can access your application at localhost:80.
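As a quick sanity check (assuming curl is available on the host), you could then verify the mapping with:

curl -v http://localhost:80/

which should hit the frontend bound to port 3000 inside the container.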
I am new to docker, using https://github.com/mattrayner/docker-lamp
I've read about the docker run command but I'm still not quite getting the -p option. Is there a way to make it tell Apache to listen on a non-standard port?
I have succeeded in starting it on the default port 80, then re-configuring/re-loading Apache, from within the container, to bind itself to port 8080. But in that scenario I can't access the container's Apache from outside it via localhost:8080. (If that makes sense.)
I simply want to develop something using PHP 5.6 without disturbing anything else on my local setup, which is running PHP 7.0. If there's another way to achieve the same end, I'm good with that too.
The -p or --publish option is a host:container port mapping specifically so that you don't have to change what may already be running inside the container.
If the container is already running on port 80 but you want to access it externally (via your host or laptop) on port 8080, then you can simply run with -p 8080:80, which will map your host port 8080 to the container port 80.
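For example, a minimal sketch for the image built from the linked repo (image name and tag are assumptions; adjust them to whatever you actually pull or build):

docker run -d -p 8080:80 mattrayner/lamp:latest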
Multiple containers can run and use port 80 on the same host (since the containers have their own IP addresses on the Docker network), but a given host port can only be published to one container at a time.
For example, if you had 3 containers you wanted to run and all of them were listening on port 80, you could start the first with -p 8080:80, the second with -p 8082:80, and the third with -p 8084:80.
The -p section of https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p---expose goes into this a bit deeper.
I have a Rails app deployed with Dokku on DigitalOcean. I've created a Postgres database and linked it with the Rails app. Everything worked fine until I restarted the droplet. I figured out that the app stopped working because on restart every Docker container gets a new port, and the Rails application isn't able to connect to it. If I run dokku postgresql:info myapp it still shows the old port, even though the port has actually changed. If I manually change database.yml and push it to the Dokku repo, everything works.
So how do I prevent Docker from assigning a different port each time the server restarts? Or maybe there is an option to change the ports of running containers?
I don't have much experience with Dokku, but in Docker there's no such thing as "a container's port".
In Docker you can expose a container's port to receive incoming requests and map it to a specific port on your host machine.
With that you can, for instance, run your Postgres inside a container and tell Docker that you want to expose 5432, the default PostgreSQL port, to receive incoming requests:
sudo docker run --expose=5432 -P <IMAGE> <COMMAND>
The --expose=5432 tells Docker to expose port 5432 to receive incoming connections from the outside world.
The -P flag tells Docker to map all exposed ports in your container to randomly chosen ports on the host machine.
With that you can connect to Postgres by pointing at your host's IP and the mapped port.
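For example (container name, client, and the placeholders below are assumptions for illustration), you could look up the mapping and then connect:

docker port my-postgres 5432
psql -h <host_ip> -p <mapped_port> -U postgres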
If you want to map a container's port to a specific host machine port you can use the -p flag in the <host_port>:<container_port> format:
sudo docker run -p 666:5432 <IMAGE> <COMMAND>
Not sure if this helps in a Dokku environment, but I hope so.
For more information about docker's run command see: https://docs.docker.com/reference/commandline/cli/#run