Docker container still running after stop and remove

I'm currently trying to learn Docker basics.
I'm looking to get used to the CLI, so I pulled/ran some Docker containers, namely an Apache HTTP Server and the getting-started container. Afterwards, I ran docker ps, checked the IDs and ran
docker stop <id>
docker rm <id>
Now I tried setting up a local postgres db. So I pulled postgres and pgadmin and made them run on ports 5432 and 80 respectively.
Thinking it should all be set up, I visited localhost:80 and was greeted by the getting-started page again, despite having run stop and rm.
I thought maybe something had gone wrong and the ports weren't freed by Docker, or something similar. So I completely restarted the Docker service and stopped every process related to Docker I could find. Finally, I restarted my computer.
Looking at the task manager now, I could not find any trace of Docker processes or the like running on my machine. Despite that, if I visit localhost:80 I am still greeted with the getting-started page, and visiting localhost:8080 I am still greeted with Apache's "It works!" message.
I am at a loss here, since there is no Docker service running, yet I am still reaching the Apache server locally.
Edit 1: I do not have any servers running on my local machine. I never installed or started any Apache HTTP server on my machine.
Hopefully reproducible example (the exact steps I took):
docker pull httpd
docker run -d -p 80:80 docker/getting-started
Verify that getting-started is running on localhost:80
# This shows the getting started process running with id <id>
docker ps
# Stopping and removing the process
docker stop <id>
docker rm <id>
# Remove the getting-started image
docker image rm <image_id>
Restart PC, go to localhost:80.
For me, this shows the getting-started page again.
Then close any processes related to Docker (e.g. Docker Desktop). Make sure that com.docker.backend.exe is also not running.

I had a similar issue. Clearing the browser cache fixed it for me (as recommended by #derpischer).
For example, in Firefox: go to Settings. Under "Cookies and Site Data", use the "Clear Data" button and make sure "Cached Web Content" is selected.
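If you want to confirm that nothing is actually serving the page any more, a request made outside the browser avoids the cache entirely (assuming curl is available; on recent Windows builds it ships as curl.exe):
# Fetch only the headers; a cached browser page cannot interfere here
curl -I http://localhost:80
# "Connection refused" means nothing is listening on port 80 and the browser was showing a cached copy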

Related

Attaching IDE to my backend docker container stops that container's website from being accessible from host

Summary
I'm on macOS. I have several docker containers, and I can run all of them using docker-compose up and everything works as expected: for instance, I can access my backend container by opening http://localhost:8882/ in my browser, since port 8882 is mapped to the same port on the host by using:
ports:
- "8882:8882"
Problems start when I try to attach an IDE to the backend container so as to be able to develop "from inside" that container.
I've tried vscode's "Remote - Containers" plugin, following this tutorial, and also PyCharm Professional, which can run Docker configurations out of the box. In both cases I had the same result: I run the IDE configuration to attach to the container and its local website suddenly stops working, showing "this site can't be reached".
When using PyCharm, I noticed that Docker Desktop showed the backend container's port had changed to 54762. But I also tried that port with no luck.
I also used this command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
to get the container ip (172.18.0.4) and tried that with both ports, again, same result.
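For completeness, docker port lists the live published-port mappings of a running container, which makes it easy to spot a mapping that changed after attaching the IDE (backend here is just a placeholder for the real container name):
# Prints lines like "8882/tcp -> 0.0.0.0:8882" for each published port
docker port backend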
Any ideas?
PyCharm configuration
Interpreter: this works, in the sense that I can browse the library code installed inside the container.
Run/Debug configuration: this configuration succeeds, in the sense that I can start it and it seems to attach correctly to the backend container... though the problem described above appears.
So, there were many things at play here, since this is an already huge project full of technical debt.
But the main one is that the docker-compose file I was using ran the server with uWSGI in production mode, which interfered with many things... among them PyCharm's ability to successfully attach to the running container, debug, etc.
I was eventually able to create a new docker-compose.dev.yml file that overrides the main docker-compose file, changing only the backend server command to run Flask in development mode. That fixed everything.
Be mindful that, for some reason, the flask run command inside a Docker container does not let you see the website properly until you pass the --host=0.0.0.0 option to it. More in https://stackoverflow.com/a/30329547/5750078
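As a rough sketch of what such an override can look like (the service name backend and the Flask command are assumptions, not the actual project's values):
# docker-compose.dev.yml - only overrides the backend command for development
# Run with: docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
version: "3"
services:
  backend:
    command: flask run --host=0.0.0.0 --port=8882
    environment:
      - FLASK_ENV=development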

Simple Nginx server on docker returns 503

I'm just starting up with Docker and the first example that I was trying to run already fails:
docker container run -p 80:80 nginx
The command successfully fetches the nginx:latest image from the Docker Hub registry and runs the new container, and there is no indication in the console of anything going wrong. When I browse to localhost:80 I get 503 (Service Unavailable). I'm doing this test on Windows 7.
I tried the same command on another computer (this time on macOS) and it worked as expected, no issues.
What might be the problem? I found some issues on SO similar to mine, but they were connected with the usage of nginx-proxy, which I don't use and don't even know what it is. I'm just trying to run a normal HTTP server.
//EDIT
When I try to bind my container to a different port, for example:
docker container run -p 4201:80 nginx
I get ERR_CONNECTION_REFUSED in Chrome, so basically the connection can't be established because the destination does not exist. Why is that?
The reason it didn't work is that on Windows, Docker publishes containers on a different IP than localhost. That IP is shown at the top of the Docker client console.
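On Windows 7 this typically means Docker Toolbox, where the containers actually run inside a VirtualBox VM; the address to browse to is that VM's IP, which can be looked up with (assuming the default machine name):
# Prints the Toolbox VM's address, typically 192.168.99.100
docker-machine ip default
# Then browse to http://<that-ip>:80 (or :4201 for the second example) instead of localhost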

localhost not working with Docker on Windows 10

I am using VS2017 Docker support. VS created the Dockerfile for me, and when I build the docker-compose file it creates the container and runs the app on a 172.x.x.x IP address. But I want to run my application on localhost.
I did many things but nothing worked. I followed the Docker docs as a starter and the building Microsoft sample app walkthrough. The second link works perfectly, but I get HTTP Error 404 when I try the first link's approach.
Any help is appreciated.
Most likely a different application is already running on port 80. You'll have to forward your web site to a different port, e.g.:
docker run -d -p 5000:80 --name myapp myasp
And point your browser to http://localhost:5000.
When you start a container you specify which inner ports will be exposed as ports on the host through the -p option. -p 80:80 exposes the inner port 80 used by web sites to the host's port 80.
Docker won't complain though if another application already listens at port 80, like IIS, another web application or any tool with a web interface that runs on 80 by default.
The solution is to:
Make sure nothing else runs on port 80 (see the check below), or
Forward to a different port.
Forwarding to a different port is a lot easier.
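To see what is already occupying port 80 on Windows:
# Lists listeners on port 80 together with the owning process ID (last column)
netstat -ano | findstr :80
# Look that PID up in Task Manager's Details tab to identify the process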
To ensure that you can connect to a port, use the telnet command, e.g.:
telnet localhost 5000
If you get a blank window immediately, it means a server is up and running on that port. If you get a message and a timeout after a while, it means nothing is running there. You can use this both to check for free ports and to ensure you can connect to your container's web app.
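On Windows versions where the telnet client is not installed by default, PowerShell offers an equivalent check:
# TcpTestSucceeded : True means something is listening on that port
Test-NetConnection localhost -Port 5000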
PS: I ran into this just a week ago, as I was trying to set up a SQL Server container for tests. I already run 1 default and 2 named instances, and Docker didn't complain at all when I tried to create the container. It took me a while to realize what was wrong.
In order to access the example posted on the Docker docs, which you pointed out as not working, follow the steps below.
1 - List all docker containers (the -a flag includes stopped ones)
docker ps -a
After you run this command you should see all your Docker containers, and a container with the name webserver should be listed there if you have followed the Docker docs example correctly.
2 - Get the IP address where your webserver container is running. To do that, run the following command:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" webserver
You should now get the IP address on which the webserver container is running; hopefully you are familiar with this step, as it also appears in the building Microsoft sample app example that you attached to the question.
Access the IP address you get from running the above command and you should see the desired output.
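If you prefer to do both steps at once, PowerShell can feed the inspected address straight to the browser (a small convenience sketch, assuming the container is named webserver as in the docs example):
# Store the NAT IP in a variable, then open it in the default browser
$ip = docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" webserver
Start-Process "http://$ip"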
Answering your first question (accessing a Docker container with localhost in Docker for Windows): on a Windows host you cannot access the container via localhost due to a limitation in the default NAT network stack. A more detailed explanation of this issue can be obtained by visiting this link. It seems the Docker documentation is not yet updated, but this issue only exists on Windows hosts.
There is an issue reported for this as well; follow this link to see it.
Hope this helps you out.
EDIT
The solution for this issue seems to be coming in a future Windows release. Until that release comes out, this limitation remains on Windows hosts. Follow this link -> https://github.com/MicrosoftDocs/Virtualization-Documentation/issues/181
For those encountering this issue in 2022, changing localhost to 127.0.0.1 solved the issue for me.
There is another problem too:
you must have the parameters in the correct order.
This is WRONG:
docker run container:latest -p 5001:80
This sequence starts the container, but the -p parameter is ignored (it is passed to the container's command instead), so the container has no port mapping.
This is correct:
docker run -p 5001:80 container:latest
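This matches the synopsis shown by docker run --help: everything after the image name is treated as the command to run inside the container, not as an option for Docker itself.
# From `docker run --help`
# Usage:  docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
# Options such as -p must come before IMAGE; anything after IMAGE becomes the container's command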

Some questions about Docker -p and Dockerfile

1: docker run -d -p 3000:3000 image
If I bring up a localhost:3000 server in the container, how can I open it in my machine's browser? What's the IP?
I've tried localhost:3000 and 0.0.0.0:3000.
2: I used docker pull ubuntu and docker run to start it; after updating and deploying the server I committed it. So now I have one ubuntu image and a new image.
Next time I run a container using this new image,
the shell scripts still need to be sourced, and the server needs to be started again.
How can I commit the image so that it sources the scripts and deploys the server by itself when I docker run it?
Thanks.
I don't quite understand questions 2 or 3; can you add more context?
Regarding your question about using -p, you should be able to visit it in your browser at http://localhost:3000/. However, that assumes a couple of things are true.
First, you used -p 3000:<container-port> - looks good on this point.
Second, the image you have run exposed port 3000 (EXPOSE 3000).
And third, the service running in the container is listening on 0.0.0.0:3000. If it is listening only on localhost inside the container, then the published port will not work. Each container has its own localhost, which is usable only inside the container. So the service needs to listen on all interfaces inside the container in order for outside connections to reach it.
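A small illustration of the difference, using Python's built-in web server as a stand-in for whatever the image really runs:
# Inside the container: bound to the container's own localhost, so -p 3000:3000 cannot reach it
python -m http.server 3000 --bind 127.0.0.1
# Inside the container: bound to all interfaces, so the host can reach it through the published port
python -m http.server 3000 --bind 0.0.0.0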

Docker WebLogic 11g container network issues after restart

I'm setting up Oracle WebLogic 11g (10.3.6) in a Docker container (1.11) following Bruno's guide and docker files. I'm using the repository history to grab the files for WL 11g since it's not officially supported.
I have built all the required components and am able to start up containers and WebLogic just fine; however, after restarting the container, WebLogic slows down considerably.
The container starts with: CMD ["startWebLogic.sh"]. If I use the WL Admin Console to stop the server, or use docker stop <container_name>, and then use docker start <container_name>, the container will come up, but Admin Console requests take 5+ minutes to complete.
Everything works fine on a fresh container using something like docker run -d --name wlsadmin --hostname wlsadmin -p 7001:7001 1036-domain but as soon as the container is restarted everything grinds to a halt.
I am not making any changes to the defaults. Simply starting a new container, stopping the container, and starting it back up again.
Does anyone have suggestions on how to troubleshoot this issue and get to the root cause?
I have also created WL 12.1 and WL 12.2 containers that all work successfully, even after restarts, but my legacy app only runs on WL 10.3.6, so I'm really trying to figure this out for 11g and am stumped.
Thanks for any help!
Turns out this is related to WebLogic and not Docker, namely how long it takes to generate random numbers.
Here is the solution
While the docker files did attempt to compensate for this, the implementation was not successful. I was able to fix the docker files and the Admin Console's performance returned to normal.
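The linked solution isn't reproduced above, but the usual fix for slow SecureRandom inside a container is to point the JVM at the non-blocking entropy source; presumably the repaired docker files do something along these lines (a sketch, not the exact change):
# In the WebLogic startup environment (e.g. setDomainEnv.sh or the Dockerfile):
JAVA_OPTIONS="$JAVA_OPTIONS -Djava.security.egd=file:/dev/./urandom"
export JAVA_OPTIONS
# The /dev/./urandom spelling avoids the JDK special-casing the plain /dev/urandom path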
