I'm trying to use Confluent for the first time, and after less than a minute the 'broker' process exits with code 137.
How can I fix that?
The default Docker memory limit is 2 GB, while Confluent needs at least 8 GB to operate properly. Exit code 137 means the process was killed with SIGKILL (128 + 9), which typically happens when the container runs out of memory.
Change the limit in the Docker settings.
See the Confluent documentation, Docker section 2.
I am new to Docker and just getting started. I pulled a basic ubuntu image and started a few containers with it and stopped them. When I run the command to list all the docker containers (even the stopped ones) I get an output like this:
> docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
099c42011f24 ubuntu:latest "/bin/bash" 6 seconds ago Exited (0) 6 seconds ago sleepy_mccarthy
dde61c10d522 ubuntu:latest "/bin/bash" 8 seconds ago Exited (0) 7 seconds ago determined_rosalind
cd1a6fa35741 ubuntu:latest "/bin/bash" 9 seconds ago Exited (0) 8 seconds ago unruffled_lichterman
ff926b6eba23 ubuntu:latest "/bin/bash" 10 seconds ago Exited (0) 10 seconds ago cool_rosalind
8bd50c2c4729 ubuntu:latest "/bin/bash" 12 seconds ago Exited (0) 11 seconds ago cranky_darwin
My question is, is there a reason why docker does not delete the stopped containers by default?
The examples you've provided show that you're using an Ubuntu container just to run bash. While this is a fairly common pattern when learning Docker, it's not what Docker is used for in production scenarios, which is what Docker cares about and optimizes for.
Docker is used to deploy an application within a container with a given configuration.
Say you spin up a database container to hold information about your application. Then your Docker host restarts for some reason, and that database disappears by default. That would be a disaster.
It's therefore much safer for Docker to assume that you want to keep your containers, images, volumes, and so on, unless you explicitly ask for them to be removed and decide this is what you want when you start them, for example with docker run --rm <image>.
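To make the difference concrete, here is a minimal sketch (assuming the ubuntu:latest image is available locally and the name demo is free) contrasting the default keep-behavior with --rm:

```shell
# Without --rm, the stopped container sticks around until removed by hand:
docker run --name demo ubuntu:latest true
docker ps -a --filter name=demo   # still listed, with status "Exited (0)"
docker rm demo

# With --rm, Docker removes the container automatically when it exits:
docker run --rm --name demo ubuntu:latest true
docker ps -a --filter name=demo   # no longer listed
```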
In my opinion, there are good reasons for this. Consider the following scenario:
I build my image and start a container from it in production. At some point I stop that container, make some changes to the image, and run another instance, so a new container with a new name is running.
I then notice the new container does not work as expected. Because I still have the old container, I can start it again and stop the new one, so clients are not affected.
But what if containers were automatically deleted as soon as they stopped?
Simply put, I would have lost that fallback, my clients, and possibly my job :) And one person would be added to the unemployed :D
As @msanford mentioned, Docker assumes you want to keep your data, volumes, etc., so you'll probably re-use them when needed.
Since Docker is used to deploy and run applications (as simple as WordPress with MySQL, but with some differences compared to installing on shared hosting), it's usually not used just to run bash.
That said, running things like bash or sh to see the contents of a container is a good way to learn Docker in the first steps.
What would prevent docker service create ... command from completing? It never returns to the command prompt or reports any kind of error. Instead, it repeatedly switches between, "new", "assigned", "ready", and "starting".
Here's the command:
docker service create --name proxy -p 80:80 -p 443:443 -p 8080:8080 --network proxy -e MODE=swarm vfarcic/docker-flow-proxy
This is from "The DevOps 2.1 Toolkit - Docker Swarm", a book by Viktor Farcic, and I've come to a problem I can't get past...
I'm trying to run a reverse proxy service on a swarm of three nodes, all running locally on a LinuxMint system, and docker service create never returns to the command prompt. It sits there and loops between statuses and always says overall progress: 0 out of 1 tasks...
This is on a cluster of three nodes all running locally, created with docker-machine with one manager and two workers.
While testing, I've successfully created other overlay networks and other services using the same cluster of nodes.
As far as I can tell, the only significant differences between this and the other services I created for these nodes are:
There are three published ports. I double-checked the host and there's nothing else listening on any of those ports.
I don't have a clue what the vfarcic/docker-flow-proxy does.
I'm running this on a (real, not VirtualBox/VMWare/Hyper-V) Linux Mint 19 system (based on Ubuntu 18.04) with Docker 18.06. My host has 16Gb RAM and plenty of spare disk space. It was set up for the sole purpose of learning Docker after having too many problems on a Windows host.
Update:
If I Ctrl-C in the terminal where I tried to create the service, I get this message:
Operation continuing in background.
Use docker service ps p7jfgfz8tqemrgbg1bn0d06yp to check progress.
If I use docker service ps --no-trunc p473obsbo9tgjom1fd61or5ap I get:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
dbzvr67uzg5fq9w64uh324dtn proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Running Starting 11 seconds ago
vzlttk2xdd4ynmr4lir4t38d5 \_ proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Shutdown Failed 16 seconds ago "task: non-zero exit (137): dockerexec: unhealthy container"
q5dlvb7xz04cwxv9hpr9w5f7l \_ proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Shutdown Failed 49 seconds ago "task: non-zero exit (137): dockerexec: unhealthy container"
z86h0pcj21joryb3sj59mx2pq \_ proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Shutdown Failed about a minute ago "task: non-zero exit (137): dockerexec: unhealthy container"
uj2axgfm1xxpdxrkp308m28ys \_ proxy.1 dockerflow/docker-flow-proxy:latest@sha256:e4e4a684d703bec18385caed3b3bd0482cefdff73739ae21bd324f5422e17d93 node-1 Shutdown Failed about a minute ago "task: non-zero exit (137): dockerexec: unhealthy container"
So, no mention of node-2 or node-3 and I don't know what the error message means.
If I then use docker service logs -f proxy I get the following messages repeating every minute or so:
proxy.1.l86x3a6md766@node-1 | 2018/09/11 12:48:46 Starting HAProxy
proxy.1.l86x3a6md766@node-1 | 2018/09/11 12:48:46 Getting certs from http://202.71.99.195:8080/v1/docker-flow-proxy/certs
I can only guess that it fails to get whatever certs it's looking for. There doesn't seem to be anything at that address, but an IP address lookup shows that it leads to my ISP. Where's that coming from?
There may be more than one reason why Docker can't create the service. The simplest way to find out is to check the last status of the service by running:
docker service ps --no-trunc proxy
which includes an ERROR column that describes the last error.
Alternatively you can run
docker service logs -f proxy
to see what's going on inside the proxy service.
This is an old question, but I was experimenting with Docker and ran into the same issue.
For me it was resolved by publishing the ports before specifying the name.
I believe Docker parses the command after the name as: --name "name" "image to use" "commands to pass to image on launch".
So it keeps trying to pass those options to the container as commands, and fails, instead of treating them as options when running the service.
Add the -t flag to the command to prevent the container(s) from exiting.
The same problem happened to me as well.
I think in my case the image itself had some kind of error. I ran the following experiments:
I first pulled the image and created the service:
docker service create --name apache --replicas 5 -p 5000:80 [image repo]
The same error from the question occurred.
Like the first answer above, I tried a different argument order:
docker service create --replicas 5 -p 5000:80 --name apache [image repo]
The same error occurred.
I tried to create the service without the image downloaded locally, but the same error occurred.
I tried a different image, one which had worked before, and the service was created successfully.
The first image I used was swedemo/firstweb. (This is NOT my repo; it's my professor's.)
I didn't examine which part of the image was the problem, but I hope this can give a clue to others.
For me, it was a startup script issue causing the container to exit with code 0.
As my Docker service has to have at least one replica (given as a constraint), Docker Swarm tries to recreate it. The container fails right after being relaunched by the Docker daemon, hence causing the loop.
The container is running on Ubuntu 16.04.
Below is how I start it (the random name sad_wiles was created):
docker run -it -d alpine /bin/ash
docker run -it -d alpine /bin/sh
docker run -ti -d alpine
docker start sad_wiles runs fine and I can enter and exit sh.
However, docker stop sad_wiles gives exit code 137. Below is the log:
2017-11-25T23:22:25.301992880+08:00 container kill 61ea1f10c98e2462f496f9048dcc6b45e536d3f7ba14747f7f22b96afb2db60d (image=alpine, name=sad_wiles, signal=15)
2017-11-25T23:22:35.302560688+08:00 container kill 61ea1f10c98e2462f496f9048dcc6b45e536d3f7ba14747f7f22b96afb2db60d (image=alpine, name=sad_wiles, signal=9)
2017-11-25T23:22:35.328791538+08:00 container die 61ea1f10c98e2462f496f9048dcc6b45e536d3f7ba14747f7f22b96afb2db60d (exitCode=137, image=alpine, name=sad_wiles)
2017-11-25T23:22:35.547890765+08:00 network disconnect 3b36d7a71af5a43f0ee3cb95c159514a6d5a02d0d5d8cf903f51d619d6973b35 (container=61ea1f10c98e2462f496f9048dcc6b45e536d3f7ba14747f7f22b96afb2db60d, name=bridge, type=bridge)
2017-11-25T23:22:35.647073922+08:00 container stop 61ea1f10c98e2462f496f9048dcc6b45e536d3f7ba14747f7f22b96afb2db60d (image=alpine, name=sad_wiles)
This is not an error, as mentioned in the comment by @yament: you'll see this exit code when you do a docker stop, the initial graceful stop (SIGTERM) fails, and Docker has to send a SIGKILL. As mentioned here, it's a Linux standard: 128 + 9 = 137 (9 being the signal number of SIGKILL).
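You can reproduce the 128 + 9 = 137 convention without Docker at all; any process killed by SIGKILL reports the same exit code to its parent shell:

```shell
# Start a child shell and immediately SIGKILL it ($$ is the child's own PID);
# the parent shell observes exit code 128 + 9 = 137.
bash -c 'kill -9 $$'
echo $?   # prints 137
```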
On macOS you can increase the memory limit in Docker App > Preferences > Advanced. For example, changing mem_limit=384m to 512m works. Here is an additional resource that will help you: Exit Status.
If you are curious about how the name sad_wiles appeared as your container name: that has been a Docker feature from the early days. If you do not specify a name for your container with the --name flag of docker run, Docker generates one based on an open-source list of scientists and hackers. You can find its source code here.
The signal code issue may be due to the memory limit being too low for Docker. A GitHub issue was also opened on this; refer to it here. Try changing the memory allocation for Docker as the comments on that GitHub issue recommend.
The documentation of docker ps and docker container ls both says "List containers", but does not mention the other command. Is there a difference between those two commands?
The output looks exactly the same:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bbe3d7158eaa flaskmysqldockerized_web "python app.py" 5 hours ago Up 18 seconds 0.0.0.0:8082->5000/tcp flaskmysqldockerized_web_1
4f7d3f0763ad mysql "docker-entrypoint..." 6 hours ago Up 18 seconds 0.0.0.0:3307->3306/tcp flaskmysqldockerized_db_1
There is no difference between docker ps and docker container ls. The new command structure (docker container <subcommand>) was added in Docker 1.13 to provide a more structured user experience when using the command line.
To my knowledge, there has not yet been any official announcement to drop support for the old-style commands (like docker ps and others), although it might be reasonable to assume this might happen at some point in the future.
This is described in a blog post accompanying the release of Docker 1.13:
Docker has grown many features over the past couple years and the Docker CLI now has a lot of commands (40 at the time of writing). Some, like build or run are used a lot, some are more obscure, like pause or history. The many top-level commands clutters help pages and makes tab-completion harder.
In Docker 1.13, we regrouped every command to sit under the logical object it’s interacting with. For example list and start of containers are now subcommands of docker container and history is a subcommand of docker image.
docker container list
docker container start
docker image history
These changes let us clean up the Docker CLI syntax, improve help text and make Docker simpler to use. The old command syntax is still supported, but we encourage everybody to adopt the new syntax.
docker ps is shorthand that stands for "docker process status", whilst docker container ls is shorthand for the more verbose docker container list.
As the accepted answer explains, there is no difference in how they work, and docker container ls is the 'newer' command, so you should probably prefer it.
Both commands actually only show running containers by default, which makes the first one (docker ps) a little more confusing, as that command on its own isn't really showing 'process status'. To see the status of all containers, add the -a option for 'all' (or use --all), e.g.
docker container ls -a
or, with the older syntax:
docker ps -a or docker container ps -a
I have created a new Docker service and am determining its required resources. Since RAM allocation for a new service is greedy (if you say the container can have 8 GB of RAM, it will get them), I don't want to waste the cluster's resources.
Now I am trying to find out how much RAM a docker run took at its peak.
For example, I created an httpie image (for the rightfully paranoid, the Dockerfile is also on Docker Hub) that I execute via:
docker run -it k0pernikus/httpie-docker-alpine HEAD https://stackoverflow.com/
I know that there is a docker stats command, yet it appears to show the current memory usage, and I don't really want to monitor that.
If I run it after the container has ended, it shows 0. (To get the container id, I use the -d flag.)
$ docker run -itd k0pernikus/httpie-docker-alpine HEAD https://stackoverflow.com/
132a93ffc9e297250b8ca37b2563aa2b5e423e146890fe3383a91a7f26ef990c
$ docker stats 132a93ffc9e297250b8ca37b2563aa2b5e423e146890fe3383a91a7f26ef990c
it will show:
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
132a93ffc9e297250b8ca37b2563aa2b5e423e146890fe3383a91a7f26ef990c 0.00% 0 B / 0 B 0.00% 0 B / 0 B 0 B / 0 B 0
Yet how much RAM did it consume at maximum?
tl;dr:
Enable memory accounting
cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.max_usage_in_bytes
Docker uses cgroups under the hood. We only have to ask the kernel the right question, that is, cat the correct file. For this to work, memory accounting has to be enabled (as noted in the docs).
On systemd-based systems this is quite straightforward. Create a drop-in config for the Docker daemon:
systemctl set-property docker.service MemoryAccounting=yes
systemctl daemon-reload
systemctl restart docker.service
(This will add a little overhead though as each time the container allocates RAM a counter has to be updated. Further details: https://lwn.net/Articles/606004/)
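If you prefer writing the drop-in by hand instead of using systemctl set-property, an equivalent sketch looks like this (the file path and name are an assumption; any *.conf file in that directory is picked up):

```ini
# /etc/systemd/system/docker.service.d/memory-accounting.conf
[Service]
MemoryAccounting=yes
```

followed by systemctl daemon-reload and systemctl restart docker.service as above.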
Then by using the full container id that you can discover via docker inspect:
docker inspect --format="{{.Id}}" $CONTAINER_NAME
you can get the maximum memory used:
cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.max_usage_in_bytes
The container has to be running for this to work.