I am starting up a couple of containers using a docker-compose file. I am not specifying host ports for one of the services, so they should be randomly assigned, and they are.
The problem is that Docker Desktop is not showing the assigned ports in the Containers table, nor when I inspect the specific container.
However, if I use the docker container ls command in cmd, I can clearly see the assigned ports and I can access the containerized app by using them.
What makes it even more confusing is that this happens randomly: sometimes the ports show up in both places. I am trying to find a way to always see the assigned ports within Docker Desktop.
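In case it matters, this is how I check the mappings from the command line (the container name here is just an example); both commands list the assigned host ports:

docker container ls
docker port my-container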
Docker n00b here. I have containerized programs running on two different physical computers. On '200', I have Node-RED, DeepStack, Mosquitto, etc. On '199', I have iSpy (an NVR) plus a couple of others. I have installed Docker Swarm, and I am using Portainer as a front end. Portainer renders the manager and worker just fine and the swarm seems to be working. Here's the problem I can't figure out:
I need Node-RED (on '200') to read a file that is saved on '199', then pass it through DeepStack and do a bunch of other things, but I can't for the life of me figure out how to get Node-RED to read from the other physical machine. I figure if I can get some kind of persistent volume that is shared between all the different containers, I should be good, but when I create a volume, it can only be seen by the host it's on (not the other one). In other words, I can create a volume on '200' and have Node-RED and DeepStack see the same files, but iSpy (on '199') doesn't even have the option of mounting that same volume.
So... what am I missing? I KNOW there is an easy solution, but I can't seem to find it.
Any help would be appreciated.
I have tried to connect them using a single internal network (through Docker Swarm), but Node-RED doesn't seem to want to see the other machine when I use its internal IP.
I tried to create a persistent volume and attach it to the various Docker containers, but it seems volumes can only be attached to containers that are on the same host.
I tried to bind-mount folders outside of Docker and then share them over Samba across the home network, but Node-RED wouldn't see that folder when using the IP address of the SMB share.
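For reference, this is roughly what the volume attempt looked like (image names are placeholders):

docker volume create shared-data                                   # on '200'
docker run -d --name nodered -v shared-data:/data <node-red-image>
docker run -d --name deepstack -v shared-data:/data <deepstack-image>
docker volume create shared-data                                   # on '199' this creates a separate, empty volume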
I know that containers are a form of isolation between the app and the host (the managed running process). I also know that container images are basically the package for the runtime environment (hopefully I got that correct). What's confusing to me is when they say that a Docker image doesn't retain state. So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart? Why would I use a database in a Docker container?
It's also difficult for me to grasp LXC. On another question page I see:
LinuX Containers (LXC) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (LXC host).
What does that mean exactly? Does it mean I can have multiple versions of Linux running on the same host as long as the host supports LXC? What else is there to it?
LXC and Docker are quite different, but both are container technologies.
There are two types of containers:
1. Application containers: their main purpose is to package an application and its dependencies. These are Docker containers (lightweight containers). They run as a process on your host and do whatever you need that process to do. They don't need an OS image or a boot sequence; they come and go in a matter of seconds. You are not meant to run multiple processes/services inside a single Docker container; you can, but it is laborious. Resources (CPU, disk, memory) are shared with the host.
2. System containers: these are fat containers, meaning they are heavier and need an OS image to launch. At the same time they are not as heavy as virtual machines; they are very similar to VMs but differ a bit in architecture.
For example, take Ubuntu as the host machine: if you have LXC installed and configured on your Ubuntu host, you can run a CentOS container, an Ubuntu container of a different version, a RHEL container, a Fedora container, or any other Linux flavour on top of that Ubuntu host. You can also run multiple processes inside an LXC container. Resources are shared here as well.
So, if you have a large application running in one LXC container that needs more resources, and another application in a second LXC container that needs fewer, the container with the smaller requirement will share its resources with the one with the larger requirement.
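As a rough sketch (assuming the LXC userspace tools and the standard download template are installed; distributions and release numbers below are only examples, and availability depends on the image server), launching containers of different distributions on one Ubuntu host looks something like this:

lxc-create -n centos-ct -t download -- -d centos -r 7 -a amd64   # a CentOS container
lxc-create -n fedora-ct -t download -- -d fedora -r 38 -a amd64  # a Fedora container
lxc-start -n centos-ct
lxc-attach -n centos-ct   # full distro userland, multiple services allowed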
Answering Your Question:
So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart?
You shouldn't bake data into a database Docker image (it is not recommended).
You run/create a container from an image and you attach/mount the data to it.
So when you stop/restart a container, the data is never lost as long as you keep it on a volume, because the volume lives somewhere outside the container itself (it may be an NFS server or the host itself).
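For example (container name, password and tag are just placeholders), a PostgreSQL container whose data survives restarts can be started with a named volume mounted at the image's data directory:

docker volume create pgdata
docker run -d --name my-postgres -e POSTGRES_PASSWORD=example -v pgdata:/var/lib/postgresql/data postgres:16
docker stop my-postgres
docker start my-postgres   # the data in pgdata is still there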
Does it mean I can have multiple versions of Linux running on the same host as long as the host support LXC? What else is there to it?
Yes, you can. We run LXC containers in our production environment.
Okay so in Vagrant/VVV you can assign different hostnames to your different projects so when you go to http://myproject-1.dev your website shows up.
This is very convenient if you are working on dozens of projects at the same time. As far as I know, such a thing is not possible in Docker (it can't touch the hosts file). My question is: is there something similar we can do in Docker? Some automated tool, maybe?
I am using Docker for Windows.
Hostnames can tie many containers together. In Docker Compose there is a hostname option, but that only applies inside the Docker bridge network; it is not visible to the host.
Docker isn't a VM (although it runs within one in Windows).
You can edit your hosts file to make the hypervisor's VM reachable by name, but you're supposed to have host ports forwarded into the container instead.
Use localhost, not any hostname.
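A minimal sketch of that (image and port numbers are just examples): publish a container port to a host port, then reach it via localhost:

docker run -d -p 8080:80 nginx
# then open http://localhost:8080 on the Windows host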
If you prefer your Vagrant patterns, keep using Vagrant but provision Docker containers from it, or use Docker Machine.
I'm looking to move an application based on two linked containers to a different host. The first container comes from the official MySQL image. The second container comes from the official WordPress image. The WP container is linked to the MySQL container.
These containers have evolved over time, with different templates and data. I'd like to migrate the containers to another host. Right now, if I stop the containers I only have to issue a docker start mysql and a docker start wp and all the context (links, ports, config, whatever...) is maintained. I don't have to specify which ports I expose, or what links are in place.
What I would expect, though I don't know whether Docker offers it, is to:
export the containers
move them to the new host
import the container in the new host
in the new host issue a docker start mysql and a docker start wp
Is this possible? If not, what would be the way to get the exact same infrastructure up and running on another host?
I have tried using export/import and save/load. In both cases what you get imported is an image, not a container.
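Roughly what I ran (file names and tags are placeholders):

docker export mysql > mysql-fs.tar            # snapshot of the container's filesystem
docker import mysql-fs.tar mysql:migrated     # ...comes back as an image, not a container
docker save mysql > mysql-image.tar           # same outcome with save/load
docker load < mysql-image.tar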
Can I create a container using docker run <image> without the --link option and link other containers to it afterwards? If so, how do I link these containers then?
That's how you normally would do it. Fire up container A, then start container B with --link A:resourcename. Inside container B, you can now get to whatever container A EXPOSEs, using the info you can see in the environment variables shown by env (they will be named something with resourcename in this case).
You cannot do it the other way around (which is what I originally thought your question was about). The information a container needs in order to reach resources on the other one is available as environment variables, which you can't inject into an already running process (as far as I know).
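A minimal sketch of that order (names and images are just examples):

docker run -d --name containerA nginx
docker run --rm --name containerB --link containerA:resourcename busybox env
# the env output includes variables like RESOURCENAME_PORT_80_TCP_ADDR pointing at container A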
Of course yes, but you can only access other containers by IP (usually in the 172.17.x.x range on the default bridge).
You can use
docker inspect container_id
to find another container's IP address.
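For example (the container name is a placeholder), the address can be pulled out directly with a format string; this works for containers on the default bridge network:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' container_id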