Why does a container running a console app simply exit after starting - docker

I want to run a simple .NET Core console app interactively in a container. I am not able to do that: the container starts and then exits immediately without fully running the program.
All that the console app contains is the following three statements.
Console.WriteLine("Hello World!");
var readText = Console.ReadLine(); // Wait for me to enter some text
Console.WriteLine(readText);
The last two lines make it interactive.
When I run the container, it prints the Hello World! but then it exits immediately, without waiting for me to enter some text. Why? What am I missing?
I was able to run a .NET Core web app in a container in a similar manner, and I am able to map ports between the host and the container to successfully browse the web app. But when it comes to a console app, I am stumped.
I guess there is something very simple that I am missing, and it is driving me nuts.
The steps to reproduce are described below.
Launch VS 2019 and create a new .NET Core console project.
Add a couple of statements to make it interactive.
Add Docker support to the created project: right-click the project, then Add -> Container Orchestrator Support.
Visual Studio now creates a set of files and changes the csproj file as well.
In PowerShell, navigate to the folder containing the solution file and run "docker-compose up".
Once the images are built and the containers are up and running, we start to see the problem.
We can see Hello World! here, but the app does not wait for me to type something. Running docker ps -a shows a container that has exited. When I try to start it using docker start -i or docker start -a, the container starts but exits immediately. What should I do to make the container keep running, so that I can type something for the app inside it to read? You can see the same thing in Docker Desktop: even if I start the containers using the start button in the Docker Desktop UI, they simply stop again.
I had run web apps in containers before. With proper port mapping, a web app running inside a container can be accessed from outside. I had created a .NET Core web app along the lines described above, modified the docker-compose file to include a port mapping, and when I do docker-compose up, the app is up and running. But with a console app, the container simply exits.
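For reference, a compose port mapping of that kind looks roughly like this; the service name and ports below are hypothetical, not taken from the post:

services:
  mywebapp:
    ports:
      - "8080:80"   # host port 8080 -> container port 80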

By default, you don't have an interactive TTY when the container is started with docker-compose up.
You need to add that to your service:
stdin_open: true
tty: true
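Applied to a compose service like the one shown further down in this thread, that looks roughly like this (a sketch; only the last two lines are new):

services:
  dokconsoleapp:
    image: ${DOCKER_REGISTRY-}dokconsoleapp
    build:
      context: .
      dockerfile: DokConsoleApp/Dockerfile
    stdin_open: true   # keep STDIN open, the equivalent of docker run -i
    tty: true          # allocate a pseudo-TTY, the equivalent of docker run -t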

I found that docker-compose run is an alternative to the up command.
docker-compose run <service name from docker-compose.yml file>
Specifically, for my application, the docker compose file looks as follows.
version: '3.4'

services:
  dokconsoleapp:
    image: ${DOCKER_REGISTRY-}dokconsoleapp
    build:
      context: .
      dockerfile: DokConsoleApp/Dockerfile
    # stdin_open: true
    # tty: true
Note the last two lines are commented out.
Now if I run
docker-compose run dokconsoleapp
The container runs to the end of the program, waiting interactively for me to type input after Hello World!.
So the statements
stdin_open: true
tty: true
are not needed when you use docker-compose run instead of docker-compose up.
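If you prefer to keep using docker-compose up, the alternative is to leave those two lines uncommented and then attach your terminal to the running container from another shell; the container name below is a placeholder, and docker ps shows the real generated name:

docker ps                       # find the generated container name
docker attach <container-name>  # connect this terminal's STDIN/STDOUT to the app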

Related

Console application in docker not working

I am trying to learn Docker hands-on. To start with, I have created a simple .NET Core 3.1 console application that simply writes a message to a text file in a specific location. I have created a docker image from it and then a docker container from the image. When I run the container, it runs and then stops without any error.
The docker file:
FROM mcr.microsoft.com/dotnet/aspnet:3.1
COPY bin/Release/netcoreapp3.1/publish App/
WORKDIR /App
ENTRYPOINT ["dotnet", "ConsoleApp1.dll"]
I also checked the logs using the command "docker logs container_id", but it returns nothing.
Am I missing anything?
Docker runs a process inside a container; when that process ends, the container stops and ends too. As the process in your container only writes something and exits, the container exits and stops as well.
Also, the text file is written in the container's file system, so you will not be able to see it on your host unless you use a volume. Try printing the string to standard output instead.
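For example, assuming the app writes its file somewhere under /App/data and the image is tagged consoleapp1 (both names are hypothetical), a bind mount would make the output visible on the host:

# mount a host directory over the location the app writes to
docker run --rm -v "$(pwd)/data:/App/data" consoleapp1
# whatever the app wrote now appears under ./data on the host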

Visual Studio Docker Tools - where do the images go and where do the containers run

In Docker for Desktop, I have the docker-for-desktop cluster selected (it's running on MobyLinuxVM on HyperV).
However, when I go to Visual Studio and build / debug a project that has Docker support, then run "docker ps -a" from a command line, I do not see another container created. Does Visual Studio deploy a container by default to a separate cluster somehow?
I set the docker-compose project as the startup project (not sure why it wasn't already). Also, there were problems with the docker-compose.yml formatting I had caused while struggling to understand why it wasn't running.
Note that if you don't specify a network, docker-compose will automatically create a bridge network that all the services in docker-compose will share (they have to use this network's gateway to see services in the other containers). Across builds, it will often increment the second octet of the bridge network's gateway IP. My workaround for having to change the gateway IP constantly was to create a user-defined bridge network and add the following to the bottom of docker-compose.yml (it gets used by all the services in the file):
networks:
  default:
    external:
      name: mybridgenetwork
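The external network has to exist before docker-compose up runs; creating it once (with the same name as above) is enough:

docker network create mybridgenetwork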
Another helpful thing is that I was able to pass in multiple environment variables to a single service in docker-compose.yml like this:
services:
  myservice1:
    environment:
      "envVariable1": "somevalue"
      "envVariable2": "somevalue"
Also, I was able to make it pull from the local Docker registry. If Kubernetes is enabled in Docker Desktop, you can set the context that docker-compose will run against by right-clicking the Docker icon, going to the Kubernetes submenu, and selecting the context. If you do not see a docker-desktop context there, issue a "docker swarm init" command from PowerShell or a command prompt run as administrator.
Then, add the following to docker-compose.yml's image lines in order to get docker compose to detect the built images in the local Docker for Windows registry:
image: ${DOCKER_REGISTRY}TheImageNameYouWant
Note that the container names will be called dockercompose[some random string]_[the image name above]. You can see these running containers in powershell (as admin) by doing "docker ps -a".
One last thing - make sure that Visual Studio / Tools / Options / Container Tools has "Automatically kill containers on solution close" checked. If you think you might want to change docker-compose.yml before it runs the first time on startup, uncheck the first two checkboxes.

docker-compose run existing container

I'm running an application under development with docker-compose.
I have a "web" service running a python Flask web application. This service depends on other ones (database, cache, ...).
I need to run the "web" main service interactively in order to get access to a debugger (ipdb).
I found out that the way to do this would be
docker-compose run --name my-app.web --service-ports web
When I exit this container and try to run it again with the same command, I get this error:
ERROR: Cannot create container for service web: Conflict. The container name "/my-app.web" is already in use by container "4fed84779bb02952dedb8493a65bd83b1a6664f066183233e8c8b4dc62291643". You have to remove (or rename) that container to be able to reuse that name.
How can I start this container again without creating a new one?
Or is creating a new container each time I need to start this application the correct approach?
Or did I miss something that would let me start one of the services interactively?
As you're setting a custom name, docker-compose run doesn't remove the container once the execution is completed. To enable this behavior, use the --rm option:
docker-compose run --rm --name my-app.web --service-ports web
You can also remove the container manually to be able to run it again:
docker rm my-app.web
This is not necessary if you don't set a custom name.
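If you do want to reuse the stopped container instead of creating a new one each time, attaching to it on start should also work, using the custom name set above:

docker start -ai my-app.web   # -a attaches output, -i attaches STDIN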

celery won't connect to rabbitmq broker for kubernetes

I am currently trying to deploy a basic task queue and frontend using celery, rabbitmq and flower on Kubernetes (and minikube). I am following the example here:
https://github.com/kubernetes/kubernetes/tree/release-1.3/examples/celery-rabbitmq
I can get everything to work following the instructions; however, when I run docker build on the Dockerfile in ./celery-app-add, push the image to my own repository, and replace endocode/celery-app-add with <mine>/celery-app-add, I can't get the example to run anymore. I am assuming that the Dockerfile in source control is wrong, because if I pull the endocode/celery-app-add image and run bash in it, I am logged in as the root user, whereas that is not the case with the image built from my <mine>/celery-app-add Dockerfile.
After booting up all of the containers and services, I can see the following in the logs:
2016-08-18T21:05:44.846591547Z AttributeError: 'ChannelPromise' object has no attribute '__value__'
The celery logs show:
2016-08-19T01:38:49.933659218Z [2016-08-19 01:38:49,933: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@rabbit:5672//: [Errno -2] Name or service not known.
If I echo RABBITMQ_SERVICE_SERVICE_HOST within the container, it appears as the same host as indicated in the rabbitmq-service after running kubectl get services.
I am not really sure where to go from here. Any suggestions are appreciated. Also, I added USER root (won't run this in production, don't worry) to my Dockerfile and still ran into the same issues above. docker history endocode/celery-app-add hasn't been too helpful either.
It turns out the problem comes down to this celery issue: Celery prefers CELERY_BROKER_URL over anything set in the app configuration. To fix this, I unset CELERY_BROKER_URL in the Dockerfile, and Celery picked up my configuration correctly.
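As a sketch only (the actual Dockerfile isn't shown here), "unsetting" the variable amounts to making sure no ENV line defines it, so that the broker URL comes from the Celery app configuration instead:

# Hypothetical line to remove or comment out in the Dockerfile:
# an environment variable like this takes precedence over the app config
# ENV CELERY_BROKER_URL amqp://guest@rabbit:5672//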

How can I mount a file in a container, that isn't available before first run?

I'm trying to build a Dockerfile for a webapp that uses a file-based database. I would like to be able to mount the file from the host*
The file is in the root of the complete software install, so it's not really ideal to mount that complete dir.
Another problem is that before the first use, the database-file isn't created yet. A first time user won't have a database, but another user might. I can't 'mount' anything during a build** I believe.
It could probably work like this:
First/new database start:
Start the container (without mount).
The webapp creates a database.
Stop the container
subsequent starts:
Start the container using a -v to mount the file
It would be better if that extra start/stop weren't needed for a user. Even if it is, I'm still looking for a user-friendly way to do this, possibly with two 'methods' of starting it (maybe I can define a first-boot method in docker-compose as well as a 'normal' one?).
How can I do this in a simple way, so that it's clear for any first-time user?
* The reason is that you can copy your Dockerfile and the database file as a backup, and be up and running with just those 2 elements.
** How to mount host volumes into docker containers in Dockerfile during build
One approach that may work is:
Start the database in the build file in such a way that it has time to create the default file before exiting.
Declare a VOLUME in the Dockerfile for the file after the above instruction. This will cause the file to be copied into the volume when a container is started, assuming you don't explicitly provide a host path (see the sketch after the commands below).
Use data-containers rather than volumes. So the normal usage would be:
docker run --name data_con my_db echo "my_db data container"
docker run -d --volumes-from data_con my_db
...
The first container should exit immediately but set up the volume that is used in the second container.
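A minimal sketch of the first two steps; the image name and the initialization command are entirely hypothetical, and the VOLUME here is declared on the directory that holds the database file rather than on the file itself:

FROM my-webapp-base                 # hypothetical base image with the webapp installed
# hypothetical step that runs the app just long enough to create the default DB file
RUN /app/bin/create-default-db.sh
# declared after the file exists, so the directory's contents are copied into the
# volume the first time a container starts without an explicit host path
VOLUME /app/data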
I was trying to achieve something similar and managed to do it by mounting a folder, instead of the file, and creating a symlink in the Dockerfile, initially pointing to a non-existing file:
docker-compose.yml
version: '3.0'

services:
  bash:
    build: .
    volumes:
      - ./data:/data
    command: ['bash']
Dockerfile
FROM bash:latest
RUN ln -s /data/.bash_history /root/.bash_history
Then you can run the container with:
docker-compose run --rm bash
With this setup, you can push an empty "data" folder into the repository for example (and exclude its content with .gitignore). In the first run, inside the container /root/.bash_history will be a "broken" symlink, pointing to a file that does not exist. When you exit the shell, bash will write the history to /root/.bash_history, which will end up in /data/.bash_history.
This is probably not the correct approach.
If you have multiple containers that are trying to share some information through the file-system, you should probably let them share some directory.
That way, the flow is simple and very hard to get wrong.
You simply mount the same directory, say /data (from the host's perspective) into all the containers that are trying to use it.
When an application starts and it can't find anything inside that directory, it can gracefully stop and exit with a code that says: "Cannot start, DB not initialized yet".
You can then configure some mechanism with a growing timeout to try and restart that container until you're successful.
On the other hand, the app that creates the DB can start and create it inside the directory or find an existing file to use.
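A rough compose sketch of that pattern, with hypothetical service names; both services mount the same host directory, and the consumer is simply restarted until the DB file exists:

services:
  db-init:                # hypothetical service that creates the DB file
    build: .
    volumes:
      - ./data:/data      # the same host directory is mounted in both containers
  webapp:                 # hypothetical service that reads the DB file
    build: .
    volumes:
      - ./data:/data
    restart: on-failure   # restarted with increasing delay while the DB is still missing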
