I was watching an nginx tutorial and, to follow along, I created an Ubuntu 18.04 Docker container. I installed and started the nginx service as shown in the tutorial and everything went well. Then I removed both the Docker image and the container I was working on. Despite removing the container and the image, the address "http://104.200.23.232/" still returns the nginx welcome page on my machine. As far as I can tell, this indicates that the nginx service is still up and running. My question is: how can I stop the nginx service and disable its auto-start now?
Note: My host operating system is Windows 10, and restarting the computer did not solve this problem.
Yeah, well, blind guessing here, but the IP "104.200.23.232" is registered to Linode, which is a cloud hosting/VPS provider, right? So it is probably not the IP of your local computer. Where did you get this IP from? Did you try or install something on a cloud server? I am pretty sure some things have just been mixed up.
Use "prune" to start fresh:
docker system prune
Try this to stop nginx in the Docker container:
service nginx stop
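If you also want to keep it from coming back after a reboot, here is a minimal sketch assuming a systemd-based Ubuntu environment (adjust if your setup uses something else):
# stop the running nginx service
systemctl stop nginx
# prevent it from starting automatically at boot
systemctl disable nginx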
Related
First of all, sorry if I am not following the correct format for Stack Overflow; this is my first time asking something here.
I am running Docker in an Ubuntu LXC in Proxmox, but the Docker containers cannot be accessed from the internet. I am using Nginx Proxy Manager. Surprisingly, the containers worked well when I was running Docker Desktop on Windows 11, but I switched to Ubuntu to try to make things easier and it didn't work, so I tried Proxmox, which I had used before, and it is not working either. I have NGINX set up with Cloudflare, and when I try to access, for example, my Nextcloud container from the internet, I get a "Web server is down" error (code 521).
Everything works fine when I access it from the local network. I can ping websites from both inside the container and the host with no lost packets, so I know the containers have internet access.
I forgot to add that I have opened all the ports necessary for my Nextcloud container to work and I checked online with ismyportopen.com and it looks like the ports I need are open.
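For reference, one rough way to narrow down where the 521 comes from is to test each hop separately; the addresses and port below are placeholders, assuming Nextcloud is published on 443:
# from the Proxmox/LXC host: is the container answering on the published port?
curl -kI https://127.0.0.1:443
# from another machine on the LAN: does the host accept the connection?
curl -kI https://192.168.1.50:443
# from outside the network (e.g. a phone off Wi-Fi): does the WAN address reach it?
curl -kI https://203.0.113.10:443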
I have a VM running Ubuntu 16.04 on which I want to deploy an application packaged as a Docker container. The application needs to be able to perform HTTP requests towards a server behind a VPN (e.g. server1.vpn-remote.com).
I successfully configured the host VM in order to connect to the VPN through openconnect, I can turn this connection on/off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Weirdly enough, there is no error in the VPN connection service logs, which are stuck at the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but not the container. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection gets broken again even for the host machine.
NOTE: I already checked on the ip routes and there seems to be no conflict between Docker and VPN subnets.
NOTE: running the container with --net="host" results in both host and container being able to access the VPN but I would like to avoid this option as I will eventually make a docker compose deployment which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue, as I'm able to ping the IP corresponding to server1.vpn-remote.com even after the VPN connection seemed to be failing. I'm going through documentation regarding DNS management with Docker and Docker Compose and their usage of the host's /etc/resolv.conf file.
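In case it helps anyone else hitting the same thing, a minimal sketch of pointing a container at the VPN's resolver explicitly (the 10.0.0.53 address and the search domain are placeholders for whatever your VPN actually pushes):
# per container, at run time
docker run --dns 10.0.0.53 --dns-search vpn-remote.com mycontainer
The same can be set globally with a "dns" entry in /etc/docker/daemon.json, followed by a restart of the Docker daemon.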
I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that, when the daemon starts, it makes some decisions/configs based on the state of the network at that time. In my case, the daemon starts when I boot up. Unsurprisingly, when I boot up, I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, goes through my network directly.
Hat tip to this answer for guiding me down the correct path.
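If you want to avoid having to remember the manual restart, a rough sketch of a systemd drop-in that starts the Docker daemon after the VPN unit (the openconnect unit name is a guess; substitute whatever your VPN service is actually called):
# /etc/systemd/system/docker.service.d/after-vpn.conf
[Unit]
After=openconnect.service
Wants=openconnect.service
Run sudo systemctl daemon-reload afterwards so the drop-in is picked up.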
I'm new to Docker and I'm running macOS Sierra. I have installed Docker and can open Kitematic on localhost; I can see my containers without issue and can access the site on localhost. When I switch to VirtualBox, my containers are no longer in Kitematic and I can't figure out how to access them in my browser.
Could anyone give me some insight on what to do here?
Your intention in getting the IP of your host (192.*) is unclear to me.
You can achieve this by simply adding the '--net host' flag to your docker run command.
For more details go here.
If your intention is to make your container available to everyone, then there is no need to do this. Everyone should be able to access your Docker container by using your machine's IP followed by the port number (http://192.168.x.x:xxxx).
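For that URL to work, the container's port has to be published when the container is started; a minimal sketch (the image name and ports are placeholders):
docker run -d -p 8080:80 my-web-image
# then browse to http://192.168.x.x:8080
If the containers actually run inside a VirtualBox VM created by Docker Toolbox, use the VM's address instead of the Mac's; docker-machine ip default prints it, assuming the machine is named default.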
By default, the IP of a Docker container changes after restarting it. I am confused about why Docker was designed this way. Wouldn't it be more reasonable to retain the IP across a simple restart? This should be distinguished from creating a new container.
The IP of a Docker container does change after a restart as of now, but the community is working on this highly requested feature here. Meanwhile, I am using pipework to assign a specific IP to my Docker container.
pipework docker-bridge-name-here docker-container-name 10.1.1.110/24#10.1.1.1
The only drawback is that you'll have to do it every time you restart docker.
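For what it's worth, the re-apply step after a restart can look like this, reusing the names from the example above:
# restart the container, then re-attach the fixed address with pipework
docker restart docker-container-name
pipework docker-bridge-name-here docker-container-name 10.1.1.110/24#10.1.1.1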
We're starting to go down the containerization route with Docker and have created Docker versions of some of our infrastructure and applications.
Apigee is proving a little more of a struggle... we're doing a standalone install inside our Dockerfile, and that works great. Once the install has finished and the container is started, you can hit the UI and the management API just fine from the machine running the container.
The problem appears to be the virtual host. Inside the container it is fine: if you enter the container (nsenter has been massively useful), you can then run the /test/test1-sa.sh script with no problems. From outside the container, that virtual host port is not accessible, even when you use the EXPOSE command in your Dockerfile.
The only thing I have to go on is the value of all the hostname entries in our silent installation file. It is pointing to 127.0.0.1, which the Apigee docs seem to warn against.
Many thanks
Michael
Make sure you set your hostname to your external IP address in /etc/hosts (as Docker runs on Ubuntu -- I believe it's in /etc/sysconfig/network if you're running CentOS). It should look something like this at a minimum:
127.0.0.1 localhost
172.56.12.67 MyApigeeInstance
Then running hostname -i should give you the external IP address, and the individual components will know how to find each other. Otherwise, all components are registered as 127.0.0.1 and the machines can't find each other.
You might also want to take a look at what ports are open for your docker image. The install doc for Apigee lists a TON of ports you need open for the various components.
I don't know if you have to do this as part of the Docker image or if there is a way to configure its underlying Ubuntu settings.
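One thing worth double-checking on the ports: EXPOSE in the Dockerfile only documents a port, it doesn't publish it, so the Apigee ports still need to be mapped when the container is started. A rough sketch with placeholder port numbers (check the Apigee install doc for the real list):
docker run -d -p 9000:9000 -p 8080:8080 my-apigee-image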