Pionion Docker is not working in the background - docker

I am working with Pionion for streaming (https://pionion.github.io/docs/deploy/docker). I followed their documentation and it worked fine for me. The command I used to run it is "docker-compose up".
Since I can't run these Docker containers by hand every time, I ran them in the background using the "docker-compose up -d" command. But that didn't work for me.
How can I run it in the background and make it work as before?
I want this streaming server to stay up in the background all the time, so that I can use it without logging in to the server.
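A minimal sketch of the commands involved and how one might diagnose the failure, assuming the compose file from the Ion docs and that the failing service reports the reason in its logs:
# start the stack detached, from the directory containing docker-compose.yml
docker-compose up -d
# check whether the services actually stayed up
docker-compose ps
# inspect recent logs from all services to see why one exited
docker-compose logs --tail=100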

Related

Can I shell into a worker dyno on Heroku?

I can shell into a Heroku app using the CLI command:
heroku run -a app-name bash
This works beautifully; however, I cannot seem to specify which dyno I want to shell into. I have one web and one worker dyno, each with its own Docker image, and the run command always goes into the web dyno.
Is there a solution to shell into a worker dyno?
I found the answer myself. Based on Heroku's documentation:
If your app is composed of multiple Docker images, you can target the process type when creating a one-off dyno:
$ heroku run bash --type=worker
This works exactly as expected.
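A usage example combining this with the app flag from the question (assuming the standard heroku run flags):
$ heroku run bash --type=worker -a app-name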

Differences between detached mode and background in docker

Running docker run with a -d option is described as running the container in the background. This is what most tutorials do when they don't want to interact with the container. On another tutorial I saw the use of bash style & to send the process to the background instead of adding the -d option.
Running docker run -d hello_world only outputs the container ID. On the other hand, docker run hello_world & still gives me the same output as if I had run docker run hello_world.
If I do the both experiments with docker run nginx I get the same behavior on both (at least as far as I can see), and both show up if I run docker ps.
Is the process the same in both cases (apart from the printing of the ID and the output not being redirected with &)? If not, what is going on behind the scenes in each?
Docker is designed as a client-server architecture: the docker client and the docker daemon (which in fact can be broken down further into containerd, shim, runc, etc.).
When you execute docker run, the docker client just sends the request to the docker daemon, and the daemon calls runc etc. to start the container.
So:
docker run -d: runc runs the container in the background, and you can use docker logs $container_name to see all the logs later; the "background" here happens on the server side.
docker run &: the shell's & puts the docker run command itself in the background, so the "background" here is on the client side. You can still see stdout etc. in the terminal. Moreover, once you leave the terminal (even if your shell was started with nohup), you will no longer see the output there; you still need docker logs to see it.
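A small sketch of the difference, using the nginx image from the question:
# server-side background: only the container ID is printed, logs stay with the daemon
docker run -d --name web nginx
docker logs web
# client-side background: the shell backgrounds the docker run process,
# so the container's output still lands in this terminal
docker run --name web2 nginx &
docker ps   # both containers show up here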

The nginx_phpfpm container goes unhealthy when running on ECS as a task

Why is this really happening on AWS ECS? I tested the Docker image locally before pushing it to ECR. It runs smoothly and is healthy.
Now, when I push the same image to ECR and run it as a task, setting up the task definition in ECS, it keeps stopping and restarting after a short period of time.
The health status shows unhealthy. I am not using the ALB health check but the Docker health check built into ECS. I thought it might be a problem with the command, so I tried all the options suggested by people online.
CMD-SHELL, curl -f http://localhost/ || exit 1
But nothing seems to work here.
What might be the exact cause that a Docker image which runs so well locally does not run on ECS?
I even thought that maybe it's not running in the background, so I added this command to the entrypoint setting in the ECS task definition.
systemctl start nginx
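One way to narrow this down is to run the ECS health check command against the image locally before pushing to ECR; the container and image names below are assumptions for illustration:
# run the image the same way ECS would, then execute the health check inside it
docker run -d --name ecs-check my-nginx-phpfpm
docker exec ecs-check sh -c "curl -f http://localhost/ || exit 1"
echo $?   # a non-zero exit status is what makes ECS mark the task unhealthy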

Docker connection refused when started with -ti bash

I am new to docker and I tried to run the linuxconfig/lemp-php7 image. Everything worked fine and I could access the nginx web server installed on the container. To run this image I used this command:
sudo docker run linuxconfig/lemp-php7
When I tried to run the image with the following command, in order to get access to the container through bash, I couldn't connect to nginx and got a connection refused error. Command: sudo docker run -ti linuxconfig/lemp-php7 bash
I tried this several times so I'm pretty sure it's not any kind of coincidence.
Why does this happen? Is this a problem specific to this particular image, or is it a general problem? And how can I get a shell in the container and access the web server at the same time?
I'd really like to understand this behavior to improve my general understanding of docker.
docker run runs the specified command instead of what that container would normally run. In your case, it appears to be supervisord, which presumably in turn runs the web server. So you're preventing any of that from happening.
My preferred method (except when I'm trying to debug a container that won't even start properly) is to do the following after running the container normally:
docker exec -i -t $CONTAINER_ID /bin/bash
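Put together, that looks roughly like this (the container name lemp is an assumption for illustration; the image's default command is what starts supervisord and the web server):
# let the image run its normal startup
sudo docker run -d --name lemp linuxconfig/lemp-php7
# open a shell alongside the running services
sudo docker exec -i -t lemp /bin/bash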

Docker Error: push is already in progress

I'm trying to push an image whose previous push was interrupted by a network dropout. But I get this error:
Error: push rimian/ruby-node-npm is already in progress
But when I run docker ps I don't see anything running.
What shall I do?
Restart the Docker service on the machine where you are running Docker.
On Ubuntu:
sudo service docker restart
Just wait.
I had this once, too, and the problem is that the push is still running in the background, hence you can't do another one.
So just wait, and the problem will disappear automatically after some time.
The reason you do not see anything running with docker ps is that this command only shows containers, not internal Docker processes, and pushing an image is not done by a container.
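If waiting doesn't help, a sketch of the restart-and-retry route on a systemd-based host (image name taken from the question):
# restart the daemon to clear the stuck push, then retry
sudo systemctl restart docker
docker push rimian/ruby-node-npm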

Resources