How can I stop the Kafka containers?

I set up my Kafka containers by following the tutorial here: https://success.docker.com/article/getting-started-with-kafka
Then I found that I can't remove the containers anymore, even though I tried docker container prune. The containers are still running.
What should I do?
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
caa0b94c1d98 qnib/plain-kafka:1.1.0 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) broker.2.3fij6pt90qt9sb9aco0i2dpys
b888cb6f783a qnib/plain-kafka:1.1.0 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) broker.3.xqmjnfnfg7ha46lf6drlrq4ki
dcdda2d778c2 qnib/plain-kafka:1.1.0 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) broker.1.gtgluxt6q58z2irzgfmu969ba
843def0b24fb qnib/plain-zkui "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) zkui.1.7zks618eae8sp4woc7araydix
d7ced19be88c qnib/plain-kafka-manager:2018-04-25 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) manager.1.jdu5gnprhr4d982vz50511rhg
a67ac962e682 qnib/plain-zookeeper:2018-04-25 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) 2181/tcp, 2888/tcp, 3888/tcp zookeeper.1.xar7cmdgozdj79orow0bmj3ev
880121f2fee5 qnib/golang-kafka-producer:2018-05-01.5 "kafka-producer" 3 minutes ago Up 3 minutes (healthy) producer.2.hety8za590v1twdgj2byvrmse
b6487d29812e qnib/golang-kafka-producer:2018-05-01.5 "kafka-producer" 3 minutes ago Up 3 minutes (healthy) producer.1.5oz02c8cw5oefc97xbarq5qoa
8b3a81905e90 qnib/golang-kafka-producer:2018-05-01.5 "kafka-producer" 3 minutes ago Up 3 minutes (healthy) producer.3.p8uh3hzr22fgm7u4gl1p3fiyw

I found I have to use docker service rm to remove the services; because of the replicas setting, Docker Swarm reschedules any replica container that is stopped or killed.
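A minimal sketch of that cleanup, assuming the service names match the container-name prefixes in the listing above (broker, zkui, manager, producer, zookeeper):

# List the services the swarm is managing
docker service ls

# Removing a service stops and removes its replica containers
docker service rm broker zkui manager producer zookeeper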

The prune command only removes stopped containers; it doesn't work against running ones. You will need to either stop or kill them first.
docker kill caa0b94c1d98

Related

Setting up Confluent Kafka Community Locally - broker container keeps exiting

I'm trying to set up Kafka locally and am facing an issue. Whenever I run docker compose up, all containers come up correctly. After some time, the broker container stops running for some reason. There is no error in the container logs.
Below is the status of all docker containers:
0c27a63bb0e7 confluentinc/ksqldb-examples:5.5.1 "bash -c 'echo Waiti…" 6 minutes ago Up 6 minutes ksql-datagen
4e4a30204ccc confluentinc/cp-ksqldb-cli:5.5.1 "/bin/sh" 6 minutes ago Up 6 minutes ksqldb-cli
61b86ff2a6d6 confluentinc/cp-ksqldb-server:5.5.1 "/etc/confluent/dock…" 6 minutes ago Up 6 minutes (health: starting) 0.0.0.0:8088->8088/tcp, :::8088->8088/tcp ksqldb-server
2e022b64a760 cnfldemos/kafka-connect-datagen:0.3.2-5.5.0 "/etc/confluent/dock…" 6 minutes ago Exited (137) 5 minutes ago connect
3c7d273683fb confluentinc/cp-kafka-rest:5.5.1 "/etc/confluent/dock…" 6 minutes ago Exited (137) 5 minutes ago rest-proxy
6b6d36fb9d88 confluentinc/cp-schema-registry:5.5.1 "/etc/confluent/dock…" 6 minutes ago Up 6 minutes 0.0.0.0:8081->8081/tcp, :::8081->8081/tcp schema-registry
3bb20335ecd1 confluentinc/cp-kafka:5.5.1 "/etc/confluent/dock…" 6 minutes ago Exited (137) 5 minutes ago broker
7b2f922ef8ef confluentinc/cp-zookeeper:5.5.1 "/etc/confluent/dock…" 6 minutes ago Up 6 minutes 2888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp, 3888/tcp zookeeper
All I want is a single node Kafka cluster with Schema Registry up and running locally. Any pointers are appreciated.
Exit code 137 is memory related: it means the process was killed with SIGKILL (137 = 128 + 9), which in this situation almost always means the OOM killer stopped the container. Give the Docker VM more memory or run fewer containers.
If all you want is Kafka and the Schema Registry, remove the ksqlDB, REST Proxy, and DataGen containers from the Compose file.
I'd also suggest using a later image tag, such as 7.3.1.
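To confirm the out-of-memory theory, docker inspect exposes an OOMKilled flag in the container state; a quick check might look like this, using the broker container name from the listing above:

# Prints true if the kernel's OOM killer terminated the container
docker inspect broker --format '{{.State.OOMKilled}}'

# The recorded exit code should show 137
docker inspect broker --format '{{.State.ExitCode}}'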

Airflow Docker unhealthy triggerer

I'm trying to set up Airflow on my machine using Docker and the docker-compose file provided by Airflow here: https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#docker-compose-yaml
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d4d8de8f7782 apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (healthy) 8080/tcp airflow_airflow-scheduler_1
3315f125949c apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (healthy) 8080/tcp airflow_airflow-worker_1
2426795cb59f apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (healthy) 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp airflow_airflow-webserver_1
cf649cd645bb apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (unhealthy) 8080/tcp airflow_airflow-triggerer_1
fa6b181113ae apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (healthy) 0.0.0.0:5555->5555/tcp, :::5555->5555/tcp, 8080/tcp airflow_flower_1
b6e05f63aa2c postgres:13 "docker-entrypoint.s…" 2 minutes ago Up 2 minutes (healthy) 5432/tcp airflow_postgres_1
177475be25a3 redis:latest "docker-entrypoint.s…" 2 minutes ago Up 2 minutes (healthy) 6379/tcp airflow_redis_1
I followed all the steps described at that URL, and every Airflow component is working fine, but the Airflow triggerer shows an unhealthy status.
I'm fairly new to Docker; I just know the basics and don't really know how to debug this. Can anyone help me out?
Try to follow all the steps on their website, including:
mkdir ./dags ./logs ./plugins
echo -e "AIRFLOW_UID=$(id -u)\nAIRFLOW_GID=0" > .env
I don't know why, but after that it works, although the triggerer still shows as unhealthy.
airflow.apache.org/docs/apache-airflow/stable/start/docker.html
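To see why a container is flagged unhealthy, you can dump the results of its health check; a sketch, assuming the triggerer container is named as in the listing above:

# Show the recent health-check attempts, including each probe's output
docker inspect --format '{{json .State.Health}}' airflow_airflow-triggerer_1

# The container's own logs often show the underlying error
docker logs airflow_airflow-triggerer_1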

How to reuse a container that has exited in Docker

I have created a lot of containers; I am new to Docker.
When I run docker ps -a I get the following result:
debian@osboxes:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a6a62181dcf0 python "/bin/bash" About a minute ago Exited (127) 8 seconds ago try1
ed5ef0b8155d 1d1a162a72a6 "/bin/bash" 22 hours ago Exited (0) 22 hours ago pedantic_ellis
19a4eff3b5e5 1d1a162a72a6 "/bin/bash" 22 hours ago Exited (0) 22 hours ago assignment4
30933891f08c eeadc22d21a9 "python3" 22 hours ago Exited (0) 22 hours ago python
ccdab94fd32f 1d1a162a72a6 "/bin/bash" 28 hours ago Exited (0) 24 hours ago confident_wu
ce462ecfc5f2 1d1a162a72a6 "/bin/bash -v /home/…" 28 hours ago Exited (127) 28 hours ago stupefied_grothendieck
6123f134934c 1d1a162a72a6 "/bin/bash" 28 hours ago Exited (1) 28 hours ago stupefied_taussig
0ed23a8112a4 1d1a162a72a6 "/bin/bash" 29 hours ago Exited (0) 29 hours ago vigilant_bartik
c343731b7cde 1d1a162a72a6 "/bin/bash" 30 hours ago Exited (0) 29 hours ago gallant_ardinghelli
2f95d3b4c1b8 1d1a162a72a6 "/bin/bash" 30 hours ago Created nice_hermann
5ebe9f18c744 1d1a162a72a6 "/bin/bash" 46 hours ago Created pensive_easley
c1b43edfafb9 1d1a162a72a6 "/bin/bash" 46 hours ago Exited (1) 22 hours ago adoring_williams
42dea69d1d4e 1d1a162a72a6 "/bin/bash" 46 hours ago Created funny_austin
6f736902e650 1d1a162a72a6 "/bin/bash" 46 hours ago Exited (1) 46 hours ago strange_ride
09306e5ec5d1 1d1a162a72a6 "--name=kaushik" 2 days ago Created pensive_shtern
699fb2a23e1c 1d1a162a72a6 "--name=kaushik" 2 days ago Created sharp_feistel
9f7b29ab512e 1d1a162a72a6 "--name=kaushik" 2 days ago Created elastic_payne2
25bfc74fab3b 1d1a162a72a6 "/bin/bash" 2 days ago Exited (1) 2 days ago festive_einstein
e658dd320297 1d1a162a72a6 "/bin/bash" 2 days ago Exited (255) 2 minutes ago objective_napier
ebae378d9152 1d1a162a72a6 "/bin/bash" 2 days ago Exited (1) 2 days ago brave_ritchie
23c7f4293b30 hadoop-build-1001 "/bin/bash" 2 days ago Exited (0) 2 days ago suspicious_lumiere
5090081f6809 hadoop-build-1001 "/bin/bash" 2 days ago Exited (1) 2 days ago quizzical_keller
425d59be9cbf hadoop-build-1001 "/bin/bash" 2 days ago Exited (0) 2 days ago distracted_lederberg
11c55ce7f011 hadoop-build-1001 "/bin/bash" 2 days ago Created elastic_noyce
1ccaf0477995 hadoop-build-1001 "/bin/bash" 2 days ago Created mystifying_tu
62528115f4b7 hadoop-build-1001 "/bin/bash" 2 days ago Created determined_meninsky
fca64af2f595 hadoop-build-1001 "/bin/bash" 2 days ago Created elastic_goldwasser
eecb3153bded hadoop-build-1001 "/bin/bash" 2 days ago Created cool_cray
30b6d61fcac9 hadoop-build-1001 "/bin/bash" 2 days ago Exited (1) 2 days ago quirky_kapitsa
cf992a8b8286 hadoop-build-1001 "/bin/bash" 2 days ago Created hungry_goldstine
0f9a951f7593 hadoop-build-1001 "/bin/bash" 2 days ago Created crazy_wright
e25dcf8a8be8 hadoop-build-1001 "/bin/bash" 2 days ago Created bold_pasteur
73d068e0d756 hadoop-build-1001 "/bin/bash" 2 days ago Exited (0) 2 days ago condescending_goodall
adda325294cd hadoop-build-1001 "/bin/bash" 2 days ago Exited (0) 2 days ago serene_wilson
75a9a3262505 hadoop-build-1001 "/bin/bash" 2 days ago Exited (0) 2 days ago hardcore_khorana
e38726a74e9b hadoop-build-1001 "/bin/bash" 3 days ago Exited (255) 2 days ago beautiful_clarke
4060dbc85d2d hadoop-build-1001 "/bin/bash" 3 days ago Exited (0) 3 days ago strange_yonath
174509213b30 hadoop-build-1001 "/bin/bash" 8 days ago Exited (255) 7 days ago hadoop-c
fa82c595e214 1d1a162a72a6 "/bin/bash" 8 days ago Exited (0) 8 days ago agitated_edison
4b07fcc45271 python "python3" 2 weeks ago Exited (255) 8 days ago pyC
1cc06f213eb7 abee520343a4 "/bin/sh -c 'apt-get…" 3 months ago Exited (100) 3 months ago compassionate_kowalevski
8c1eb67f7325 1b1c7b120b48 "/bin/sh -c 'cd /opt…" 3 months ago Exited (255) 3 months ago dreamy_jepsen
1406d7476a28 1180f37ef8b1 "/bin/sh -c 'mkdir -…" 3 months ago Exited (35) 3 months ago upbeat_taussig
e88bcf7743e2 f0f5acc11f91 "/bin/sh -c 'apt-get…" 3 months ago Exited (100) 3 months ago tender_cohen
I made some changes in a container that was created from one of the Hadoop images shown below:
debian@osboxes:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
python latest eeadc22d21a9 2 weeks ago 933MB
hadoop-build-1001 latest 1d1a162a72a6 3 months ago 2.02GB
hadoop-build latest 5c1480006f78 3 months ago 1.96GB
ubuntu xenial 5f2bf26e3524 3 months ago 123MB
I powered off the machine, rebooted, etc.
I made changes to one of the containers; in the docker ps -a output above it is:
25bfc74fab3b 1d1a162a72a6 "/bin/bash" 2 days ago Exited (1)
How can I connect to or start that same container again after a reboot?
What I am currently doing is:
docker run -it 1d1a162a72a6 /bin/bash
I make some changes and run a few Python programs, but after a reboot all the changes are gone. That means I have to copy the Python files I edited back into the container and do everything from scratch: install vim each time, run apt-get update, install the software, and then check and edit the program again.
I am not able to understand how to go back to the same container that was created moments before exiting. What mistake am I making?
docker run always starts a new container. You can restart and re-attach to an existing container, provided it wasn't started with --rm, using docker start:
docker start -ai <CONTAINER_ID>
Notice that you should pass a container ID, not an image ID, to docker start. See more in the Docker help:
$ docker start --help
Usage: docker start [OPTIONS] CONTAINER [CONTAINER...]
Start one or more stopped containers
Options:
-a, --attach Attach STDOUT/STDERR and forward signals
--detach-keys string Override the key sequence for detaching a container
-i, --interactive Attach container's STDIN
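Applied to the container mentioned in the question, that would be:

# Restart the stopped container and attach an interactive shell to it
docker start -ai 25bfc74fab3b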
I made some changes in a container... after a reboot all the changes are gone. That means I have to copy the Python files I edited back into the container and do everything from scratch: install vim each time, run apt-get update, install software, and edit the program again.
Images are immutable; a container adds only a thin writable layer on top of its image. What you were editing lived in that temporary layer, so every new docker run starts again from the clean image, and removing a container discards its layer entirely.
To preserve changes in the image itself, edit the Dockerfile and rebuild the image, so your changes become part of the image's immutable layers.
Realistically, though, you'd use a volume mount from the host into the container and edit the code locally, which syncs into the container. No need for in-container text editors or repeated apt-get runs.
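A minimal sketch of both approaches, reusing the image ID from the question (the host path and the my-hadoop-dev tag are illustrative assumptions, not from the question):

# Bind-mount a host directory; edits on the host appear live inside /work
docker run -it -v "$HOME/work:/work" -w /work 1d1a162a72a6 /bin/bash

# Or bake the tooling in once with a hypothetical Dockerfile:
#   FROM 1d1a162a72a6
#   RUN apt-get update && apt-get install -y vim
docker build -t my-hadoop-dev .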

Docker info shows containers but docker container ls doesn't

When I run docker info, it shows that I have 18 containers running.
% docker info
Client:
Debug Mode: false
Server:
Containers: 18
Running: 18
Paused: 0
Stopped: 0
Images: 9
...
I want to delete these containers, but when I run docker container ls -a, it shows an empty list. How can I find them?
These containers are also preventing me from deleting images.
% docker rmi -f 1e94481e8f30
Error response from daemon: conflict: unable to delete 1e94481e8f30 (cannot be forced) - image is being used by running container 7e9b08a0007b
You are probably running Kubernetes (for example, the one built into Docker Desktop).
After stopping Kubernetes, to remove all stopped containers and all images not referenced by at least one container:
docker system prune --all
To stop all running containers:
docker stop $(docker ps -a -q)
To remove all containers:
docker rm $(docker ps -a -q)
You should be able to delete all images after that
Those 18 containers belong to Kubernetes. You can confirm this by going to Preferences > Kubernetes and checking Show system containers (advanced).
After that, just run docker container ls -a again and you will see those 18 containers.
These are the containers you are not seeing unless you check that option:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6c3c65d4bcf4 a8c3d87a58e7 "/compose-controller…" 3 minutes ago Up 3 minutes k8s_compose_compose-6c67d745f6-sr4zj_docker_34e7ef25-166e-11ea-857d-025000000001_2
663d6419ce76 eb516548c180 "/coredns -conf /etc…" 3 minutes ago Up 3 minutes k8s_coredns_coredns-6dcc67dcbc-2twts_kube-system_0c8d1f5f-166e-11ea-857d-025000000001_2
d04a4caf922d eb516548c180 "/coredns -conf /etc…" 3 minutes ago Up 3 minutes k8s_coredns_coredns-6dcc67dcbc-chk2l_kube-system_0c8df4b4-166e-11ea-857d-025000000001_2
324d5d216b07 f3591b2cb223 "/api-server --kubec…" 3 minutes ago Up 3 minutes k8s_compose_compose-api-57ff65b8c7-svv9t_docker_34e161f1-166e-11ea-857d-025000000001_2
f9f74acd5ab6 849af609e0c6 "/usr/local/bin/kube…" 3 minutes ago Up 3 minutes k8s_kube-proxy_kube-proxy-w4x7l_kube-system_0c95526c-166e-11ea-857d-025000000001_2
3cbfa75f1466 k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_compose-6c67d745f6-sr4zj_docker_34e7ef25-166e-11ea-857d-025000000001_2
f11f1cc4bbba k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_coredns-6dcc67dcbc-chk2l_kube-system_0c8df4b4-166e-11ea-857d-025000000001_2
cbb52fdaf130 k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_coredns-6dcc67dcbc-2twts_kube-system_0c8d1f5f-166e-11ea-857d-025000000001_2
5ad88766f27d k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_compose-api-57ff65b8c7-svv9t_docker_34e161f1-166e-11ea-857d-025000000001_2
9f326ea7db5d k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-proxy-w4x7l_kube-system_0c95526c-166e-11ea-857d-025000000001_2
ea36a7a0d248 f1e3e5f9f93e "kube-scheduler --bi…" 3 minutes ago Up 3 minutes k8s_kube-scheduler_kube-scheduler-docker-desktop_kube-system_ef4d089e81b94aa15841e51ed8c41712_2
f3dfa711ea0f 1e94481e8f30 "kube-apiserver --ad…" 3 minutes ago Up 3 minutes k8s_kube-apiserver_kube-apiserver-docker-desktop_kube-system_b1dff398070b11d23d8d2653b78d430e_2
1e5cf76eaf20 2c4adeb21b4f "etcd --advertise-cl…" 3 minutes ago Up 3 minutes k8s_etcd_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_2
d631fca0d4ac k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-scheduler-docker-desktop_kube-system_ef4d089e81b94aa15841e51ed8c41712_2
20242b387b05 36a8001a79fd "kube-controller-man…" 3 minutes ago Up 3 minutes k8s_kube-controller-manager_kube-controller-manager-docker-desktop_kube-system_86e291a2049db314a5eca69a05cf6ced_2
b32c2d63090f k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-apiserver-docker-desktop_kube-system_b1dff398070b11d23d8d2653b78d430e_2
4d6e49f60ead k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_2
9035ccdcae5d k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-controller-manager-docker-desktop_kube-system_86e291a2049db314a5eca69a05cf6ced_2
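If you only want to see (or operate on) the Kubernetes-managed containers, note that they all share the k8s_ name prefix visible in the listing, so you can filter on it:

# List only the containers whose names start with k8s_
docker ps --filter "name=k8s_"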

How do I properly shut down these k8s containers?

I was doing some self-learning with Kubernetes and I have these containers that will not permanently shut down:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8e08ecdf12c2 fadcc5d2b066 "/usr/local/bin/kube…" About a minute ago Up About a minute k8s_kube-proxy_kube-proxy-mtksn_kube-system_08f1149a-4ac6-11e9-bea5-080027db2e61_0
744282ae4605 40a817357014 "kube-controller-man…" About a minute ago Up About a minute k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_17eea6fd9342634d7d40a04d577641fd_0
0473a3e3fedb f59dcacceff4 "/coredns -conf /etc…" About a minute ago Up About a minute k8s_coredns_coredns-86c58d9df4-l6mdf_kube-system_08f82a2f-4ac6-11e9-bea5-080027db2e61_0
6e9a0a03dff1 4689081edb10 "/storage-provisioner" About a minute ago Up About a minute k8s_storage-provisioner_storage-provisioner_kube-system_0a7e1c9d-4ac6-11e9-bea5-080027db2e61_0
4bb4356e57e7 dd862b749309 "kube-scheduler --ad…" About a minute ago Up About a minute k8s_kube-scheduler_kube-scheduler-minikube_kube-system_4b52d75cab61380f07c0c5a69fb371d4_0
973e42e849c8 f59dcacceff4 "/coredns -conf /etc…" About a minute ago Up About a minute k8s_coredns_coredns-86c58d9df4-l6hqj_kube-system_08fd4db1-4ac6-11e9-bea5-080027db2e61_1
338b58983301 9c16409588eb "/opt/kube-addons.sh" About a minute ago Up About a minute k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_5c72fb06dcdda608211b70d63c0ca488_4
3600083cbb01 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-addon-manager-minikube_kube-system_5c72fb06dcdda608211b70d63c0ca488_3
97dffefb7a4b ldco2016/multi-client "nginx -g 'daemon of…" About a minute ago Up About a minute k8s_client_client-deployment-6d89489556-mgznt_default_1f1f77f2-4c5d-11e9-bea5-080027db2e61_1
55224d847c72 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-proxy-mtksn_kube-system_08f1149a-4ac6-11e9-bea5-080027db2e61_3
9a66d39da906 3cab8e1b9802 "etcd --advertise-cl…" About a minute ago Up About a minute k8s_etcd_etcd-minikube_kube-system_8490cea1bf6294c73e0c454f26bdf714_6
e75a57524b41 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_etcd-minikube_kube-system_8490cea1bf6294c73e0c454f26bdf714_5
5a1c02eeea6a fc3801f0fc54 "kube-apiserver --au…" About a minute ago Up About a minute k8s_kube-apiserver_kube-apiserver-minikube_kube-system_d1fc269f154a136c6c9cb809b65b6899_3
2320ac2ab58d k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-apiserver-minikube_kube-system_d1fc269f154a136c6c9cb809b65b6899_3
0195bb0f048c k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-scheduler-minikube_kube-system_4b52d75cab61380f07c0c5a69fb371d4_3
0664e62bf425 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_coredns-86c58d9df4-l6mdf_kube-system_08f82a2f-4ac6-11e9-bea5-080027db2e61_4
546c4195391e k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-controller-manager-minikube_kube-system_17eea6fd9342634d7d40a04d577641fd_4
9211bc0ce3f8 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_client-deployment-6d89489556-mgznt_default_1f1f77f2-4c5d-11e9-bea5-080027db2e61_3
c22e7c931f46 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_coredns-86c58d9df4-l6hqj_kube-system_08fd4db1-4ac6-11e9-bea5-080027db2e61_3
e5b9a76b8d68 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_storage-provisioner_kube-system_0a7e1c9d
What is the most efficient way to shut them all down in one go and stop them from restarting?
I ran a minikube stop and that took care of it, but I am unclear as to whether that was the proper way to do it.
This looks like the output of docker ps. When using Kubernetes, you should generally not worry about things at the Docker level or about which containers Docker is running.
Some of the running containers are part of the Kubernetes control plane itself, so you should only shut those down if you plan to shut down Kubernetes entirely. In that case, the right way to shut it down depends on how you started it (minikube, GKE, etc.).
If you don't plan to shut down Kubernetes itself, but want to stop the extra containers Kubernetes is running on your behalf (as opposed to the containers that make up the Kubernetes system), run kubectl get pods --all-namespaces to see all the "user-land" pods that are running. The pod is the level of abstraction you primarily interact with in Kubernetes; the specific Docker processes behind the pods are not something you normally need to worry about.
EDIT: I see you updated your question to say that you ran minikube stop. Yes, that is the correct way to do it, nice!
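Concretely, the two commands from this answer look like:

# See every pod Kubernetes is running, system and user alike
kubectl get pods --all-namespaces

# Shut the whole minikube cluster down; its containers stop and stay stopped
minikube stop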
