I am following the Docker Getting Started tutorials and I am stuck at part 3 (Docker Compose). I am working on a fresh Ubuntu 16.04 installation. I followed the tutorials for parts 1 and 2, except for logging in to a Docker account and pushing the newly created image to a remote repository.
The Dockerfile and Python file are the same as in part 2, and the .yml file is the same as in part 3 (I copy-pasted them with vim).
I can apparently deploy a stack from the compose file just fine. However, when I get to the part where I am supposed to send a request via curl, I get the following response:
curl: (7) Failed to connect to localhost port 80: Connection refused
This is the output of docker service ls right after returning from docker stack deploy:
ID NAME MODE REPLICAS IMAGE PORTS
m3ux2u3i6cpv getstartedlab_web replicated 1/5 username/repo:tag *:80->80/tcp
This is the output of docker container ls (fired right after docker service ls):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bd870fcb64f4 username/repo:tag "python app.py" 7 seconds ago Up 1 second 80/tcp getstartedlab_web.2.p9v9p34kztmu8rvht3ndg2xtb
db73404d495f username/repo:tag "python app.py" 7 seconds ago Up 1 second 80/tcp getstartedlab_web.1.z3o2t10oiidtzofsonv9cwcvd
Meanwhile, docker ps returns no lines.
And this is the output of docker network ls:
NETWORK ID NAME DRIVER SCOPE
5776b070996c bridge bridge local
47549d9b2e88 docker_gwbridge bridge local
59xa0454g133 getstartedlab_webnet overlay swarm
e27f62ede27d host host local
ramvt1h8ueg7 ingress overlay swarm
f0fe862c5dcc none null local
I can still run the image as a single container and get the expected result, i.e. I can connect to it via browser or curl and get an error message related to Redis, but I do not understand why it doesn't work when I deploy a stack.
As far as Ubuntu firewall settings are concerned, I have not touched anything since the installation, and as for Docker and Docker Compose, I have only followed the steps in the "getting started" tutorials for parts 1 to 3, including downloading the Docker Compose binary and changing its permissions with chmod as described. I also added my user to the docker group so I don't have to sudo every time I need to run a command. I am not behind a proxy server (I am running all tests locally) and I haven't tinkered with any defaults either.
I think this may be a duplicate of this question, though it hasn't been answered or commented on yet. It is not a duplicate of this question, as I am following a different tutorial.
UPDATE:
As it turned out, I was using EXACTLY the same docker-compose.yml file. The key issue was a mismatch in the Docker image name, as is visible in the docker service ls output; a sketch of the kind of fix this needed follows the image listing below. I thank Janshair Khan for the inspiration. What is strange is that there is a username/repo image apparently created 9 months ago:
REPOSITORY TAG IMAGE ID CREATED SIZE
<my getting-started image>
python 2.7-slim 4fd30fc83117 7 weeks ago 138MB
hello-world latest f2a91732366c 2 months ago 1.85kB
username/repo <none> c7f5ee4d4030 9 months ago 182MB
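For reference, a rough sketch of the kind of fix this needed (the names below are the tutorial's placeholders, not real values): either edit the image: line in docker-compose.yml so it points at an image that actually exists, or retag the local image so it matches what the compose file references, then redeploy the stack.
# retag the local image to match the name used in docker-compose.yml
docker tag <my getting-started image> username/repo:tag
# remove and redeploy the stack so the services pick up the correct image
docker stack rm getstartedlab
docker stack deploy -c docker-compose.yml getstartedlab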
Related
I have a single-node swarm. My stack has two services. I deployed like so:
$ docker stack deploy -c /tmp/docker-compose.yml -c /tmp/docker-compose-prod.yml ide-controller
Creating network ide-controller_default
Creating service ide-controller_app
Creating service ide-controller_traefik
No errors. However, according to docker ps, only one container is created. The ide-controller_traefik container was not created.
When I check docker stack services, it says 0/1 replicas for the traefik service:
ID NAME MODE REPLICAS IMAGE PORTS
az4n6brex4zi ide-controller_app replicated 1/1 boldidea.azurecr.io/ide/controller:latest
1qp623hi431e ide-controller_traefik replicated 0/1 traefik:2.3.6 *:80->80/tcp, *:443->443/tcp
docker service logs shows nothing:
$ docker service logs ide-controller_traefik -n 1000
$
There are no traefik containers in docker ps -a, so I can't check logs:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
922fdff58c25 boldidea.azurecr.io/ide/controller:latest "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 3000/tcp ide-controller_app.1.py8jrtmufgsf3inhqxfgkzpep
How can I find out what went wrong or what is preventing the container from being created?
docker service ps <service-name/id> has an ERROR column that can expose errors encountered by swarm while trying to create containers, such as bad image names.
Or, for a more detailed look, docker service inspect <service-name/id> shows the current and previous service spec, as well as some root-level nodes that trace the state of the last operation and its message.
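For example, applied to the traefik service from the question (just a sketch using standard flags):
docker service ps --no-trunc ide-controller_traefik    # the ERROR column shows why tasks fail to start
docker service inspect --pretty ide-controller_traefik # human-readable summary of the service spec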
I have just installed Ubuntu 20.04 and installed Docker using snap. I'm trying to run a few different Docker images for HBase and RabbitMQ, but each time I start an image, it immediately exits with status 126.
$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4d58720fce3a dajobe/hbase "/opt/hbase-server" 5 seconds ago Exited (126) 4 seconds ago hbase-docker
b7a84731a05b harisekhon/hbase "/entrypoint.sh" About a minute ago Exited (126) 59 seconds ago optimistic_goldwasser
294b95ef081a harisekhon/hbase "/entrypoint.sh" About a minute ago Exited (126) About a minute ago goofy_tu
I have tried everything, including docker inspect on the separate images, but nothing gives away why the containers exit immediately. Any suggestions?
EDIT
When I start the container, I run the following:
$ sudo bash start-hbase.sh
It gives output exactly like it should:
Starting HBase container
Container has ID 3c3e36e1e0fbc59aa0783a4c7f3cb8690781b2d04e8f842749d629a9c25e0604
Updating /etc/hosts to make hbase-docker point to (hbase-docker)
Now connect to hbase at localhost on the standard ports
ZK 2181, Thrift 9090, Master 16000, Region 16020
Or connect to host hbase-docker (in the container) on the same ports
For docker status:
$ id=3c3e36e1e0fbc59aa0783a4c7f3cb8690781b2d04e8f842749d629a9c25e0604
$ docker inspect $id
I think the issue might be due to permissions, because I tried to check the logs as suggested in the comments, and I get this error:
/bin/bash: /opt/hbase-server: Permission denied
Check whether the filesystem is mounted with the noexec option, either with the mount command or by looking in /etc/fstab. If it is, remove that option and remount the filesystem (or reboot).
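A hedged sketch of that check (the mount point shown is a placeholder; note that with a snap install, Docker's data lives under /var/snap/docker/... rather than /var/lib/docker):
# list mounts that carry the noexec option
mount | grep noexec
# if the filesystem holding Docker's data is one of them, remount it with exec enabled
sudo mount -o remount,exec /path/to/that/mountpoint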
A quick solution is to restart the docker and network-manager services.
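A sketch of that, assuming a standard Ubuntu setup (if Docker was installed via snap, sudo snap restart docker is the equivalent for the first command):
sudo service docker restart
sudo service network-manager restart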
I was following this URL: How to use local docker images with Minikube?
I couldn't add a comment there, so I am putting my question here:
On my laptop, I have Linux Mint OS. Details as below:
Mint version 19,
Code name : Tara,
PackageBase : Ubuntu Bionic
Cinnamon (64-bit)
As per one of the answers on the above-referenced link:
I started minikube and checked pods and deployments
xxxxxxxxx:~$ pwd
/home/sj
xxxxxxxxxx:~$ minikube start
xxxxxxxxxx:~$ kubectl get pods
xxxxxxxxxx:~$ kubectl get deployments
I ran the docker images command:
xxxxxxxxx:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<username>/spring-docker-01 latest e10f88e1308d 6 days ago 640MB
openjdk 8 81f83aac57d6 4 weeks ago 624MB
mysql 5.7 563a026a1511 4 weeks ago 372MB
I ran the below command:
eval $(minikube docker-env)
Now when I check docker images, it looks like, as the README describes, the shell reuses the Docker daemon from Minikube after eval $(minikube docker-env).
xxxxxxxxxxxxx:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx alpine 33c5c6e11024 9 days ago 17.7MB
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 5 weeks ago 39.2MB
k8s.gcr.io/kubernetes-dashboard-amd64 v1.10.0 0dab2435c100 5 weeks ago 122MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 6 months ago 97MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.0 ad86dbed1555 6 months ago 148MB
k8s.gcr.io/kube-apiserver-amd64 v1.10.0 af20925d51a3 6 months ago 225MB
k8s.gcr.io/kube-scheduler-amd64 v1.10.0 704ba848e69a 6 months ago 50.4MB
k8s.gcr.io/etcd-amd64 3.1.12 52920ad46f5b 6 months ago 193MB
k8s.gcr.io/kube-addon-manager v8.6 9c16409588eb 7 months ago 78.4MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64 1.14.8 c2ce1ffb51ed 9 months ago 41MB
k8s.gcr.io/k8s-dns-sidecar-amd64 1.14.8 6f7f2dc7fab5 9 months ago 42.2MB
k8s.gcr.io/k8s-dns-kube-dns-amd64 1.14.8 80cc5ea4b547 9 months ago 50.5MB
k8s.gcr.io/pause-amd64 3.1 da86e6ba6ca1 9 months ago 742kB
gcr.io/k8s-minikube/storage-provisioner v1.8.1 4689081edb10 11 months ago 80.8MB
k8s.gcr.io/echoserver 1.4 a90209bb39e3 2 years ago 140MB
Note: notice that the docker images command lists different images before and after step 2.
As I didn't see the image that I wanted to run on minikube, I pulled it from my Docker Hub repository.
xxxxxxxxxxxxx:~$ docker pull <username>/spring-docker-01
Using default tag: latest
latest: Pulling from <username>/spring-docker-01
05d1a5232b46: Pull complete
5cee356eda6b: Pull complete
89d3385f0fd3: Pull complete
80ae6b477848: Pull complete
40624ba8b77e: Pull complete
8081dc39373d: Pull complete
8a4b3841871b: Pull complete
b919b8fd1620: Pull complete
2760538fe600: Pull complete
48e4bd518143: Pull complete
Digest: sha256:277e8f7cfffdfe782df86eb0cd0663823efc3f17bb5d4c164a149e6a59865e11
Status: Downloaded newer image for <username>/spring-docker-01:latest
I verified that I can see that image using the docker images command:
xxxxxxxxxxxxx:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<username>/spring-docker-01 latest e10f88e1308d 6 days ago 640MB
nginx alpine 33c5c6e11024 10 days ago 17.7MB
Then I tried to build the image as stated in the referenced link's steps:
xxxxxxxxxx:~$ docker build -t <username>/spring-docker-01 .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/sj/Dockerfile: no such file or directory
As the error states that the Dockerfile doesn't exist at that location, I am not sure where exactly I can see the Dockerfile for the image I pulled from Docker Hub.
It looks like I have to go to the location where the image has been pulled and run the above-mentioned command from there. Please correct me if I am wrong.
Below are the steps I will be doing after I fix the above-mentioned issue.
# Run in minikube
kubectl run hello-foo --image=myImage --image-pull-policy=Never
# Check that it's running
kubectl get pods
UPDATE-1
There is a mistake in the above steps.
Step 6 is not needed. The image has already been pulled from Docker Hub, so there is no need for the docker build command.
With that, I went ahead and followed the instructions mentioned by @aurelius in the response.
xxxxxxxxx:~$ kubectl run sdk-02 --image=<username>/spring-docker-01:latest --image-pull-policy=Never
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/sdk-02 created
Checked pods and deployments
xxxxxxxxx:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
sdk-02-b6db97984-2znlt 1/1 Running 0 27s
xxxxxxxxx:~$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
sdk-02 1 1 1 1 35s
Then I exposed the deployment on port 8084, as I was already using other ports (8080 through 8083):
xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8084
service/sdk-02 exposed
Then I verified that the service had started, checked that there was no issue on the Kubernetes dashboard, and then checked the URL:
xxxxxxxxx:~$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h
sdk-02 NodePort 10.100.125.120 <none> 8084:30362/TCP 13s
xxxxxxxxx:~$ minikube service sdk-02 --url
http://192.168.99.101:30362
When I tried to open the URL http://192.168.99.101:30362 in the browser, I got this message:
This site can’t be reached
192.168.99.101 refused to connect.
Search Google for 192 168 101 30362
ERR_CONNECTION_REFUSED
So the question: is there any issue with the steps performed?
UPDATE-2
The issue was with the below step:
xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8084
service/sdk-02 exposed
Upon checking the Dockerfile of my image <username>/spring-docker-01:latest, I saw that I was exposing port 8083 (EXPOSE 8083).
Maybe that was what was causing the issue.
So I went ahead and changed the expose command:
xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8083
service/sdk-02 exposed
And then it started working.
If anyone has something to add to this, please feel free.
However, I am still not sure where exactly I can see the Dockerfile for the image I pulled from Docker Hub.
docker build does not know what you mean by your command, because the -t flag requires a specific format:
--tag , -t Name and optionally a tag in the ‘name:tag’ format
xxxxxxxxxx:~/Downloads$ docker build -t shivnilesh1109/spring-docker-01 .
So the proper command here should be:
docker build -t shivnilesh1109/spring-docker-01:v1(1) .(2)
(1) the desired name:tag of your image
(2) the directory your Dockerfile is in.
After you proceed to minikube deployment, it will be enough just to run:
kubectl run *desired name of deployment/pod* --image=*name of the image with tag* --image-pull-policy=Never
If this does not fix your issue, try adding the path to the Dockerfile manually. I've tested this on my machine: the error stopped after tagging the image properly, and it also worked with the full path to the Dockerfile; otherwise I got the same error as you.
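For example, a sketch of the two variants described above (the paths are placeholders for wherever your Dockerfile actually lives):
# build using the Dockerfile in the current directory
docker build -t shivnilesh1109/spring-docker-01:v1 .
# or point at the Dockerfile and build context explicitly
docker build -t shivnilesh1109/spring-docker-01:v1 -f /path/to/project/Dockerfile /path/to/project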
For your UPDATE-2 question: this should also help you understand the port exposed in the Dockerfile versus the ports in the kubectl expose command.
Dockerfile:
The EXPOSE instruction does not actually publish the port. It
functions as a type of documentation between the person who builds the
image and the person who runs the container, about which ports are
intended to be published.
For more details, see EXPOSE.
Kubectl expose:
--port: The port that the service should serve on. Copied from the resource being exposed, if unspecified
--target-port: Name or number for the port on the container that the service should direct traffic to. Optional.
For more details, see kubectl expose.
So I think you should add the --target-port parameter with the port that you exposed in the Dockerfile; then the port mapping will be correct.
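Applied to the deployment from the question, that would look something like this (a sketch; 8083 being the port from the Dockerfile's EXPOSE):
kubectl expose deployment sdk-02 --type=NodePort --port=8084 --target-port=8083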
You can just create a Dockerfile with this content:
FROM shivnilesh1109/spring-docker-01
Then run:
docker build -t my-spring-docker-01 .
Try adding your local Docker image to minikube's cache, like so:
minikube cache add docker-image-name:latesttag
Then set imagePullPolicy: Never in the YAML file.
What would prevent a docker service create ... command from completing? It never returns to the command prompt or reports any kind of error. Instead, it repeatedly switches between "new", "assigned", "ready", and "starting".
Here's the command:
docker service create --name proxy -p 80:80 -p 443:443 -p 8080:8080 --network proxy -e MODE=swarm vfarcic/docker-flow-proxy
This is from "The DevOps 2.1 Toolkit - Docker Swarm", a book by Viktor Farcic, and I've come to a problem I can't get past...
I'm trying to run a reverse proxy service on a swarm of three nodes, all running locally on a Linux Mint system, and docker service create never returns to the command prompt. It sits there, loops between statuses, and always says overall progress: 0 out of 1 tasks...
This is on a cluster of three nodes all running locally, created with docker-machine with one manager and two workers.
While testing, I've successfully created other overlay networks and other services using the same cluster of nodes.
As far as I can tell, the only significant differences between this and the other services I created for these nodes are:
There are three published ports. I double-checked the host and there's nothing else listening on any of those ports.
I don't have a clue what the vfarcic/docker-flow-proxy does.
I'm running this on a (real, not VirtualBox/VMWare/Hyper-V) Linux Mint 19 system (based on Ubuntu 18.04) with Docker 18.06. My host has 16 GB RAM and plenty of spare disk space. It was set up for the sole purpose of learning Docker after having too many problems on a Windows host.
Update:
If I Ctrl-C in the terminal where I tried to create the service, I get this message:
Operation continuing in background.
Use docker service ps p7jfgfz8tqemrgbg1bn0d06yp to check progress.
If I use docker service ps --no-trunc p473obsbo9tgjom1fd61or5ap I get:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
dbzvr67uzg5fq9w64uh324dtn proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Running Starting 11 seconds ago
vzlttk2xdd4ynmr4lir4t38d5 \_ proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Shutdown Failed 16 seconds ago "task: non-zero exit (137): dockerexec: unhealthy container"
q5dlvb7xz04cwxv9hpr9w5f7l \_ proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Shutdown Failed 49 seconds ago "task: non-zero exit (137): dockerexec: unhealthy container"
z86h0pcj21joryb3sj59mx2pq \_ proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Shutdown Failed about a minute ago "task: non-zero exit (137): dockerexec: unhealthy container"
uj2axgfm1xxpdxrkp308m28ys \_ proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Shutdown Failed about a minute ago "task: non-zero exit (137): dockerexec: unhealthy container"
So, there is no mention of node-2 or node-3, and I don't know what the error message means.
If I then use docker service logs -f proxy I get the following messages repeating every minute or so:
proxy.1.l86x3a6md766@node-1 | 2018/09/11 12:48:46 Starting HAProxy
proxy.1.l86x3a6md766@node-1 | 2018/09/11 12:48:46 Getting certs from http://202.71.99.195:8080/v1/docker-flow-proxy/certs
I can only guess that it fails to get whatever certs it's looking for. There doesn't seem to be anything at that address, but an IP address lookup shows that it leads to my ISP. Where's that coming from?
There may be more than one reason why Docker can't create the service. The simplest way to see why is to check the last status of the service by running:
docker service ps --no-trunc proxy
where you have an ERROR column that describes the last error.
Alternatively you can run
docker service logs -f proxy
to see what's going on inside the proxy service.
This is an old question, but I was experimenting with Docker and ran into this same issue.
For me this issue was resolved by publishing the ports before stipulating the name.
I believe Docker treats everything after the name as: --name "name" "image to use" "commands to pass to the image on launch".
So it keeps attempting to pass those options to the container as commands, and fails, instead of using them as options when running the service.
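If reordering really is what helped, the command from the question would look something like this (just a sketch with every option placed before the image name):
docker service create -p 80:80 -p 443:443 -p 8080:8080 --network proxy -e MODE=swarm --name proxy vfarcic/docker-flow-proxy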
Add the -t flag to the command to prevent the container(s) from exiting.
The same problem happened to me as well.
I think in my case, the image had some kind of error. I did the following experiments:
I first pulled the image, and created the service
docker service create --name apache --replicas 5 -p 5000:80 [image repo]
The same error from the question happened.
Like the first answer above, I tried a different order
docker service create --replicas 5 -p 5000:80 --name apache [image repo]
Same error occurred
I tried to create the service w/o the image downloaded locally, but the same error occurred.
I tried a different image, which had worked before, and then the service was created successfully.
The first image I used was swedemo/firstweb. (This is NOT my repo, it's my prof's)
I didn't examine what part of this image was the problem, but I hope this can give a clue to others.
For me, it was a startup script issue causing the container to exit with code 0.
As my Docker service has to have at least one replica (given as a constraint), Docker Swarm tries to recreate it. The container fails right after being relaunched by the Docker daemon, hence causing the loop.
Based on Moving docker-compose containersets:
I have loaded the images:
$ docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
br/irc latest 3203cf074c6b 23 hours ago 377MB
openjdk 8u131-jdk-alpine a2a00e606b82 5 days ago 101MB
nginx 1.13.3-alpine ba60b24dbad5 4 months ago 15.5MB
But now I want to run them as they would run with docker-compose, and I cannot find any example of how.
Here is the docker-compose.yml:
version: '3'
services:
  irc:
    build: irc
    hostname: irc
    image: br/irc:latest
    command: |
      -Djava.net.preferIPv4Stack=true
      -Djava.net.preferIPv4Addresses
      run-app
    volumes:
      - ./br/assets/br.properties:/opt/br/src/java/br.properties
  nginx:
    hostname: nginx
    image: nginx:1.13.3-alpine
    ports:
      - "80:80"
    links:
      - irc:irc
    volumes:
      - ./nginx/assets/default.conf:/etc/nginx/conf.d/default.conf
So how can I run the containers and attach to them to see if they are running, and in what order do I run these three images? I just started with Docker, so I am not sure of the typical workflow (build, run, attach, etc.).
Even though I do have the docker-compose.yml file, since I have the built images from another host, can I just run docker commands to run the images, making sure that the local images are being referenced and not the ones from the Docker registry?
Thanks @tgogos, this does give me a general overview, but specifically I was looking for:
$ docker run -dit openjdk:8u131-jdk-alpine
then:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cc6ceb8a82f8 openjdk:8u131-jdk-alpine "/bin/sh" 52 seconds ago Up 51 seconds vibrant_hodgkin
This shows it's running.
2nd:
$ docker run -dit nginx:1.13.3-alpine
3437cf295f1c7f1c27bc27e46fd46f5649eda460fc839d2d6a2a1367f190cedc
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3437cf295f1c nginx:1.13.3-alpine "nginx -g 'daemon ..." 20 seconds ago Up 19 seconds 80/tcp vigilant_kare
cc6ceb8a82f8 openjdk:8u131-jdk-alpine "/bin/sh" 2 minutes ago Up 2 minutes vibrant_hodgkin
Then, finally:
[ec2-user#ip-10-193-206-13 DOCKERLOCAL]$ docker run -dit br/irc
9f72d331beb8dc8ccccee3ff56156202eb548d0fb70c5b5b28629ccee6332bb0
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f72d331beb8 br/irc "/opt/irc/grailsw" 8 seconds ago Up 7 seconds 8080/tcp cocky_fermi
3437cf295f1c nginx:1.13.3-alpine "nginx -g 'daemon ..." 56 seconds ago Up 55 seconds 80/tcp vigilant_kare
cc6ceb8a82f8 openjdk:8u131-jdk-alpine "/bin/sh" 2 minutes ago Up 2 minutes vibrant_hodgkin
All three UP !!!!
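For completeness, here is a rough plain-docker equivalent of the compose file above, with the network, ports and volumes wired up (the network name is an assumption, the host paths come from the yml, and the extra Java options passed via command: are omitted; a user-defined network replaces links:, since containers on it can reach each other by name):
# create a shared network so nginx can reach the irc container by name
docker network create br_net
docker run -d --name irc --hostname irc --network br_net \
  -v "$PWD/br/assets/br.properties:/opt/br/src/java/br.properties" \
  br/irc:latest
docker run -d --name nginx --hostname nginx --network br_net -p 80:80 \
  -v "$PWD/nginx/assets/default.conf:/etc/nginx/conf.d/default.conf" \
  nginx:1.13.3-alpine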
Your question is about docker-compose, but you also ask about run, build, and attach, which makes me think I should try to help you with some basic information (which wasn't easy for me to come to grips with a couple of months ago :-)
images
Images are the base from which containers are created. Docker pulls images from http://hub.docker.com and stores them on your host, to be used every time you create a new container. Changes in a container do not affect the base image.
To pull images from Docker Hub, use docker pull .... To build your own images, start reading about Dockerfiles. A simple Dockerfile (in an abstract way) would look like this:
# base image
FROM ubuntu
# add the files for your app into the image
ADD my_super_web_app_files /app
# start serving requests when a container runs
CMD ["/app/run_my_app.sh"]
To create the above image on your host, you use docker build .... This is a very good way to build your images, because you know the steps taken to create them.
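A minimal sketch of that step, run from the directory that contains the Dockerfile (the image name is a placeholder):
# build the image from the current directory and give it a name and tag
docker build -t my_super_web_app:latest .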
If this procedure takes long, you might later consider storing the image in a Docker registry like http://hub.docker.com, so that you can pull it from any other machine easily. I had to do this when dealing with ffmpeg on a Raspberry Pi (the compilation took hours; I needed to pull the already-built image, not build it from scratch again on every Raspberry).
containers
Containers are based on images, you can have many different containers from the same image on the same host. docker run [image] creates a new container based on that image and starts it. Many people here start thinking containers are like mini-VMs. They are not!
Consider a container as a process. Every container has a CMD and, when started, executes it. If this command finishes or fails, the container stops and exits. A good example of this is nginx: go check the official Dockerfile; the command is:
CMD ["nginx"]
If you want to see the logs from the CMD, you can docker attach ... to your container. You can also docker stop ... a running container or docker start ... an already stopped one. You can "get inside" to type commands by:
docker exec -it [container_name] /bin/bash
This opens a new tty for you to type commands, while the CMD continues to run.
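A quick sketch of those commands, with a placeholder container name:
docker attach my_container   # stream the output of the container's main process (its CMD)
docker stop my_container     # stop a running container
docker start my_container    # start an already stopped one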
To read more about the above topics (I've only scratched the surface) I suggest you also read:
Is it possible to start a shell session in a running container (without ssh)
Docker - Enter Running Container with new TTY
How do you attach and detach from Docker's process?
Why docker container exits immediately
~jpetazzo: If you run SSHD in your Docker containers, you're doing it wrong!
docker-compose
After you feel comfortable with these, docker-compose will be your handy tool, which will help you manipulate many containers with single-line commands. For example:
docker-compose up
Builds, (re)creates, starts, and attaches to containers for a service.
Unless they are already running, this command also starts any linked services.
The docker-compose up command aggregates the output of each container (essentially running docker-compose logs -f). When the command exits, all containers are stopped. Running docker-compose up -d starts the containers in the background and leaves them running.
To run your docker-compose file you would have to execute:
docker-compose up -d
Then, to see if your containers are running, you would have to run:
docker ps
This command will display all the running containers.
Then you could use the exec command, which will allow you to get a shell inside a running container:
docker-compose exec irc sh
More about docker-compose up here: https://docs.docker.com/compose/reference/up/