Docker: execute a program that requires tty - docker

I have a utility program that depends on terminal characteristics, and I want to execute it inside a Docker container. (The program is not an interactive program as such; it is an old program that was simply written that way.)
docker run -i -t or docker exec -i -t should open a tty into the container, but here is what happens:
user@1755e1f3f735:~/region/primer/cobol_v> kickstop
[Error] Unable to run without terminal device (tty)
user@1755e1f3f735:~/region/primer/cobol_v> tty
not a tty
The -t option to docker run/exec is supposed to allocate a tty, yet the tty command returns 'not a tty'. This is puzzling.
I experienced this on openSUSE and Fedora 23 hosts and images, if that matters. I used the guake and MATE (GNOME?) terminal emulators, with the same results.
Is there any solution to this, or is it by design, so that I have to replace or rewrite my utility?

I ran into the same issue, and found "docker exec -ti container script /dev/null" solved the problem.
After login to the container with the above command, I can use screen normally.
Reference: https://github.com/docker/docker/issues/8755
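For example, assuming a running container named my_container (the name is illustrative), the workaround looks roughly like this; script spawns a shell attached to a pseudo-terminal that it allocates itself, so tty then reports a real device:
docker exec -ti my_container script /dev/null
tty
/dev/pts/0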

I ran some experiments and here are the findings. Hope someone finds them useful.
(The docker commands below are abbreviated rather than complete.)
1. docker run -i -t
> tty
/dev/console
> echo $TERM
xterm
> kickstop
works!!
2. docker -d followed by docker exec -i -t (a workaround for this case is sketched after the list)
> tty
not a tty
> echo $TERM
dumb
> kickstop
[Error] Unable to run without terminal device (tty)
3. docker -d followed by docker attach
You get attached to /dev/console, but there is no prompt (because I'm running tail -f xxx.log to keep the container alive). In fact, I have to stop my application from another terminal (using docker exec) and then stop the container to get back to the host shell prompt.
4. docker start followed by docker attach
Same as above.
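As a follow-up to case 2, the script workaround from the first answer also applies inside docker exec: running script allocates a pseudo-terminal of its own even though the exec session did not get one (the container name is illustrative, and this assumes the script utility is available in the image):
docker exec -i -t mycontainer script /dev/null
tty
/dev/pts/0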

Related

Command docker start -a <Container ID> does nothing (Ubuntu 20.04.2 LTS)

I'm trying to run the start command on a container after creating it.
For example:
$ docker create busybox echo hi there
This gives me the ID of that container, like 4e59d0fe8584bb4dcaf44dbce100253b6767bf51546edc27f29f39f52ed57957.
When I start that container without any flags (such as -a), it works, but it only gives me that ID back again.
But when I try to show the output using the attach flag (-a), nothing actually happens; it doesn't even give me an error, the command simply keeps running without doing anything.
I also couldn't kill the command and stop the execution with Ctrl+C, so my only option was to close the terminal.
I tried to make the problem as clear as I can.
You can run the image via this command:
docker run -it busybox
It puts you into a shell environment, and with -i (interactive) and -t (tty) you get a terminal, which means you can see and type at the prompt.
The default CMD (PID 1) for the busybox image is sh; see this.
For docker create busybox echo hi there, the COMMAND becomes echo hi there. This means that after the container starts, it first executes echo hi there; then, as PID 1 exits, the container exits too. If you use docker ps, you won't find your container; you can only find the exited container with docker ps -a.
So,
If you intend to run a one-time task, then it is normal that you cannot enter the container once it has finished its task.
If you intend to run a daemon-style task and leave the container service running, you should choose a command that does not finish after it runs; then your container will stay up.
For your case, to get a quick feel for this, you can use tail -f /dev/null to keep the container from exiting:
# docker create busybox sh -c "echo hi there; tail -f /dev/null"
840d7c972a96712e48c9aa391aa63638fb10e12307797e338157105bdfb6934e
root@shlava:~# docker start 840d7c972a96712e48c9aa391aa63638fb10e12307797e338157105bdfb6934e
840d7c972a96712e48c9aa391aa63638fb10e12307797e338157105bdfb6934e
root@shlava:~# docker logs 840d7c972a96712e48c9aa391aa63638fb10e12307797e338157105bdfb6934e
hi there
root@shlava:~# docker exec -it 840d7c972a96712e48c9aa391aa63638fb10e12307797e338157105bdfb6934e /bin/sh
/ #
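A quick way to confirm what happened to the original one-shot container from the question (ID shortened): docker ps -a shows it as exited, and its output is still available via docker logs:
docker ps -a --filter id=4e59d0fe8584
docker logs 4e59d0fe8584
hi there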

Question about docker run command parameters, -t -i

I am confused about these three commands; I don't know the difference among them. Sorry, I am new to Docker.
I cannot see any difference in the results. Could anybody tell me the difference?
docker run -it IMAGE_NAME /bin/bash
docker run -i IMAGE_NAME /bin/bash
docker run -i IMAGE_NAME
From the docker documentation
-t : Allocate a pseudo-tty
-i : Keep STDIN open even if not attached
For interactive processes (like a shell), you must use -i -t together in order to allocate a tty for the container process.
docker run -i imagename /bin/bash
This attaches a shell to the container; you can run any shell command in it.
docker run -i imagename
This dumps stdout to your terminal, similar to docker run, but with the ability to take input from a pipe.
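A quick way to see the "input from a pipe" behaviour (busybox is used purely for illustration): with -i the piped data reaches the container's stdin; without it, cat reads nothing:
echo hello | docker run --rm -i busybox cat
hello
echo hello | docker run --rm busybox cat
(no output, because stdin was not attached)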
The docker run command has parameters for running a command in detached or foreground mode.
-i and -t fall under foreground mode.
-i : Keep STDIN open even if not attached
-t : Allocate a pseudo-tty
With -i, whenever you run the docker container, the command passed to it is executed; in your case, /bin/bash.
Note from the docs:
For interactive processes (like a shell), you must use -i -t together
in order to allocate a tty for the container process. -i -t is often
written -it as you'll see in later examples. Specifying -t is
forbidden when the client is receiving its standard input from a pipe.
docker run -it IMAGE_NAME /bin/bash --> you will be able to enter the container if you use the -i (interactive) option, which lets you execute commands in the container, and -t (tty), which gives you the terminal in which to enter those commands; /bin/bash is the Linux shell to run (e.g. sh, ksh, bash, etc.).

Error "The input device is not a TTY"

I am running the following command from my Jenkinsfile. However, I get the error "The input device is not a TTY".
docker run -v $PWD:/foobar -it cloudfoundry/cflinuxfs2 /foobar/script.sh
Is there a way to run the script from the Jenkinsfile without doing interactive mode?
I basically have a file called script.sh that I would like to run inside the Docker container.
Remove the -it from your CLI to make it non-interactive and remove the TTY. If you don't need either, e.g. when running your command inside of a Jenkins or cron script, you should do this.
Or you can change it to -i if you have input piped into the docker command that doesn't come from a TTY. If you have something like xyz | docker ... or docker ... <input in your command line, do this.
Or you can change it to -t if you want TTY support but don't have it available on the input device. Do this for apps that check for a TTY to enable color formatting of the output in your logs, or for when you later attach to the container with a proper terminal.
Or if you need an interactive terminal and aren't running in a terminal on Linux or MacOS, use a different command line interface. PowerShell is reported to include this support on Windows.
What is a TTY? It's a terminal interface that supports escape sequences, moving the cursor around, etc, that comes from the old days of dumb terminals attached to mainframes. Today it is provided by the Linux command terminals and ssh interfaces. See the wikipedia article for more details.
To see the difference of running a container with and without a TTY, run a container without one: docker run --rm -i ubuntu bash. From inside that container, install vim with apt-get update; apt-get install vim. Note the lack of a prompt. When running vim against a file, try to move the cursor around within the file.
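To see the same distinction without installing anything, you can run the tty command itself under each flag (ubuntu as in the example above, run from an interactive terminal); it reports a device only when -t is given:
docker run --rm -i ubuntu tty
not a tty
docker run --rm -t ubuntu tty
/dev/pts/0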
For docker run DON'T USE -it flag
(as BMitch said)
And it's not exactly what you are asking, but it may also be useful for others:
For docker-compose exec, use the -T flag!
The -T flag helps people who are using docker-compose exec. (It disables pseudo-tty allocation.)
For example:
docker-compose -f /srv/backend_bigdata/local.yml exec -T postgres backup
or
docker-compose exec -T mysql mysql -uuser_name -ppassword database_name < dir/to/db_backup.sql
For those who struggle with this error and git bash on Windows, just use PowerShell where -it works perfectly.
If you are using git bash on windows, you just need to put
winpty
before your 'docker line' :
winpty docker exec -it some_container bash
In order for Docker to allocate a TTY (the -t option), you already need to be in a TTY when docker run is called. Jenkins does not execute its jobs in a TTY.
Having said that, you may also want to run locally the same script that you run within Jenkins. In that case it can be really convenient to have a TTY allocated so you can send signals like Ctrl+C when running it locally.
To fix this make your script optionally use the -t option, like so:
test -t 1 && USE_TTY="-t"
docker run ${USE_TTY} ...
When using Git Bash:
1) I execute the command:
docker exec -it 726fe4999627 /bin/bash
I get the error:
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
2) Then I execute the command:
winpty docker exec -it 726fe4999627 /bin/bash
I get another error:
OCI runtime exec failed: exec failed: container_linux.go:344: starting container process caused "exec: \"D:/Git/usr/bin/
bash.exe\": stat D:/Git/usr/bin/bash.exe: no such file or directory": unknown
3) Third, I execute:
winpty docker exec -it 726fe4999627 bash
It worked.
When I use PowerShell, everything works well.
Using docker-compose exec -T fixed the problem for me via Jenkins
docker-compose exec -T containerName php script.php
Same case here: I am running the following command through a .sh (bash) script and a Python .py script.
However, I get the same error, "The input device is not a TTY".
In my case, I'm trying to take a dump from a running container in my "production" environment, with authentication and some arguments passed in,
and then capture the resulting .bak file from my MSSQL database container.
Remove -it from the command. If you want to keep it interactive, then keep -i.
You can check my .sh file and the long command that takes the dump.
If using Windows, try with cmd; for me that works. Check that Docker is started.
My Jenkins pipeline step shown below failed with the same error.
steps {
echo 'Building ...'
sh 'sh ./Tools/build.sh'
}
In my "build.sh" script file "docker run" command output this error when it was executed by Jenkins job. However it was working OK when the script ran in the shell terminal.The error happened because of -t option passed to docker run command that as I know tries to allocate terminal and fails if there is no terminal to allocate.
In my case I have changed the script to pass -t option only if a terminal could be detected. Here is the code after changes :
DOCKER_RUN_OPTIONS="-i --rm"
# Only allocate tty if we detect one
if [ -t 0 ] && [ -t 1 ]; then
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS -t"
fi
docker run $DOCKER_RUN_OPTIONS --name my-container-name my-image-tag
I know this is not directly answering the question at hand, but this is for anyone who comes upon this question while using WSL, Docker for Windows, and cmder or ConEmu.
The trick is not to use the Docker that is installed on Windows at /mnt/c/Program Files/Docker/Docker/resources/bin/docker.exe, but rather to install the Ubuntu/Linux Docker client. It's worth pointing out that you can't run the Docker daemon itself from within WSL, but you can connect to Docker for Windows from the Linux Docker client.
Install Docker on Linux
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
Connect to Docker for Windows on port 2375, which needs to be enabled in the Docker for Windows settings.
docker -H localhost:2375 run -it -v /mnt/c/code:/var/app -w "/var/app" centos:7
Or set the DOCKER_HOST environment variable, which allows you to omit the -H switch:
export DOCKER_HOST=tcp://localhost:2375
You should now be able to connect interactively with a tty terminal session.
In Jenkins, I'm using docker-compose exec -T
e.g.:
docker-compose exec -T app php artisan migrate
winpty works as long as you don't specify volumes to be mounted, such as .:/mountpoint or ${pwd}:/mountpoint.
The best workaround I have found is to use the Git Bash plugin inside Visual Studio Code and use its terminal to start and stop containers or docker-compose.
For those using Pyinvoke see this documentation which I'll syndicate here in case the link dies:
99% of the time, adding pty=True to your run call will make things work as you were expecting. Read on for why this is (and why pty=True is not the default).
Command-line programs often change behavior depending on whether a controlling terminal is present; a common example is the use or disuse of colored output. When the recipient of your output is a human at a terminal, you may want to use color, tailor line length to match terminal width, etc.
Conversely, when your output is being sent to another program (shell pipe, CI server, file, etc) color escape codes and other terminal-specific behaviors can result in unwanted garbage.
Invoke’s use cases span both of the above - sometimes you only want data displayed directly, sometimes you only want to capture it as a string; often you want both. Because of this, there is no “correct” default behavior re: use of a pseudo-terminal - some large chunk of use cases will be inconvenienced either way.
For use cases which don’t care, direct invocation without a pseudo-terminal is faster & cleaner, so it is the default.
Instead of using -it use --tty
So your docker run should look like this:
docker run -v $PWD:/foobar --tty cloudfoundry/cflinuxfs2 /foobar/script.sh
Use only the -i flag rather than the -it flag; this lets you see what is going on inside the container.
docker exec -i $USER bash <<EOF
apt install nano -y
EOF
You might see a warning, but it shows you the output from inside the container on your terminal.

How to get docker exec stdout to be as verbose as running command in container?

If I run a command using docker's exec command, like so:
docker exec container gulp
It simply runs the command, but nothing is outputted to my terminal window.
However, if I actually go into the container and run the command manually:
docker exec -ti container bash
gulp
I see gulp's output:
[13:49:57] Using gulpfile ~/code/services/app/gulpfile.js
[13:49:57] Starting 'scripts'...
[13:49:57] Starting 'styles'...
[13:49:58] Starting 'emailStyles'...
...
How can I run my first command and still have the output sent to my terminal window?
Side note: I see the same behavior with npm installs, forever restarts, etc. So, it is not just a gulp issue, but likely something with how docker is mapping the stdout.
How can I run my first command and still have the output sent to my terminal window?
You need to make sure docker run is launched with the -t option in order to allocate a pseudo tty.
Then a docker exec without -t would still work.
(I discuss docker exec -it here, which references "Fixing the Docker TERM variable issue".)
docker@machine:/c/Users/vonc/prog$ d run --name test -dit busybox
2b06a0ebb573e936c9fa2be7e79f1a7729baee6bfffb4b2cbf36e818b1da7349
docker@machine:/c/Users/vonc/prog$ d exec test echo ok
ok

Docker container will automatically stop after "docker run -d"

According to the tutorials I have read so far, "docker run -d" will start a container from an image, and the container will run in the background. This is what it looks like; we can see that we already have a container ID.
root@docker:/home/root# docker run -d centos
605e3928cdddb844526bab691af51d0c9262e0a1fc3d41de3f59be1a58e1bd1d
But if I ran "docker ps", nothing was returned.
So I tried "docker ps -a", I can see container already exited:
root#docker:/home/root# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
605e3928cddd centos:latest "/bin/bash" 31 minutes ago Exited (0) 31 minutes ago kickass_swartz
Anything I did wrong? How can I troubleshoot this issue?
The centos Dockerfile has a default command of bash.
That means that, when run in the background (-d), the shell exits immediately.
Update 2017
More recent versions of Docker allow running a container both in detached mode and in foreground mode (-t, -i, or -it).
In that case, you don't need any additional command and this is enough:
docker run -t -d centos
The bash will wait in the background.
That was initially reported in kalyani-chaudhari's answer and detailed in jersey bean's answer.
vonc@voncvb:~$ d ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4a50fd9e9189 centos "/bin/bash" 8 seconds ago Up 2 seconds wonderful_wright
Note that for alpine, Marinos An reports in the comments:
docker run -t -d alpine/git does not keep the process up.
Had to do: docker run --entrypoint "/bin/sh" -it alpine/git
Original answer (2015)
As mentioned in this article:
Instead of running with docker run -i -t image your-command, using -d is recommended because you can run your container with just one command and you don't need to detach from the container's terminal by hitting Ctrl + P + Q.
However, there is a problem with the -d option: your container stops immediately unless the command keeps running in the foreground.
Docker requires your command to keep running in the foreground. Otherwise, it thinks that your application has stopped and shuts down the container.
The problem is that some applications do not run in the foreground. How can we make this easier?
In this situation, you can add tail -f /dev/null to your command.
By doing this, even if your main command runs in the background, your container doesn't stop because tail keeps running in the foreground.
So this would work:
docker run -d centos tail -f /dev/null
Or in Dockerfile:
ENTRYPOINT ["tail"]
CMD ["-f","/dev/null"]
A docker ps would show the centos container still running.
From there, you can attach to it or detach from it (or docker exec some commands).
According to this answer, adding the -t flag will prevent the container from exiting when running in the background. You can then use docker exec -i -t <container> /bin/bash to get into a shell prompt.
docker run -t -d <image> <command>
It seems that the -t option isn't documented very well, though the help says that it "allocates a pseudo-TTY."
Background
A Docker container runs a process (the "command" or "entrypoint") that keeps it alive. The container will continue to run as long as the command continues to run.
In your case, the command (/bin/bash, by default, on centos:latest) is exiting immediately (as bash does when it's not connected to a terminal and has nothing to run).
Normally, when you run a container in daemon mode (with -d), the container is running some sort of daemon process (like httpd). In this case, as long as the httpd daemon is running, the container will remain alive.
What you appear to be trying to do is to keep the container alive without a daemon process running inside the container. This is somewhat strange (because the container isn't doing anything useful until you interact with it, perhaps with docker exec), but there are certain cases where it might make sense to do something like this.
(Did you mean to get to a bash prompt inside the container? That's easy! docker run -it centos:latest)
Solution
A simple way to keep a container alive in daemon mode indefinitely is to run sleep infinity as the container's command. This does not rely on doing strange things like allocating a TTY in daemon mode, although it does rely on doing strange things like using sleep as your primary command.
$ docker run -d centos:latest sleep infinity
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d651c7a9e0ad centos:latest "sleep infinity" 2 seconds ago Up 2 seconds nervous_visvesvaraya
Alternative Solution
As indicated by cjsimon, the -t option allocates a "pseudo-tty". This tricks bash into continuing to run indefinitely because it thinks it is connected to an interactive TTY (even though you have no way to interact with that particular TTY if you don't pass -i). Anyway, this should do the trick too:
$ docker run -t -d centos:latest
Not 100% sure whether -t will produce other weird interactions; maybe leave a comment below if it does.
This issue is because Docker containers exit if there is no application running in the container.
The -d option just runs a container in daemon mode.
So the trick to keeping your container running continuously is to point it at a shell script that will keep your application running. You can try with a start.sh file.
Eg: docker run -d centos sh /yourlocation/start.sh
This start.sh should point to a never-ending application.
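A minimal sketch of such a start.sh (the application path is a placeholder, not from the original answer):
#!/bin/sh
# Launch the real application in the background (placeholder path).
/yourlocation/yourapp &
# Block on a foreground process so PID 1 never exits and the container stays up.
exec tail -f /dev/null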
In case you don't want any application to be running, you can install monit, which will keep your Docker container running.
Please let us know if these two cases worked for you to keep your container running.
All the best
You can accomplish what you want with either:
docker run -t -d <image-name>
or
docker run -i -d <image-name>
or
docker run -it -d <image-name>
The command parameter as suggested by other answers (i.e. tail -f /dev/null) is completely optional, and is NOT required to get your container to stay running in the background.
Also note the Docker documentation suggests that combining -i and -t options will cause it to behave like a shell.
See:
https://docs.docker.com/engine/reference/run/#foreground
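A quick way to see that behaviour (the container name keepalive is illustrative): start the container detached but with -i and -t, then attach to reach the shell it keeps open; Ctrl+P followed by Ctrl+Q detaches again without stopping it:
docker run -itd --name keepalive ubuntu
docker attach keepalive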
I have this code snippet run from the ENTRYPOINT in my Dockerfile:
while true
do
echo "Press [CTRL+C] to stop.."
sleep 1
done
Run the built docker image as:
docker run -td <image name>
Log in to the container shell:
docker exec -it <container id> /bin/bash
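One plausible way to wire that loop into an image (the keepalive.sh file name and the base image are assumptions, not from the original answer):
# Dockerfile sketch: keepalive.sh contains the while/sleep loop shown above
FROM ubuntu
COPY keepalive.sh /keepalive.sh
ENTRYPOINT ["/bin/bash", "/keepalive.sh"]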
Execute the command as follows:
docker run -t -d <image-name>
If you want to specify a port, then use the command below:
docker run -t -d -p <port-no> <image-name>
Verify the running container using the following command:
docker ps
A Docker container exits when the task inside is done, so if you want to keep it alive even when it has no job or has already finished it, you can run docker run -di image. After you run docker container ls you will see it running.
Docker requires your command to keep running in the foreground. Otherwise, it thinks that your application has stopped and shuts down the container.
So if your Docker entry script launches a background process like the following:
/usr/local/bin/confd -interval=30 -backend etcd -node $CONFIG_CENTER &
The '&' makes the container stop and exit if no other foreground process is started afterwards.
So the solution is to just remove the '&', or to have another foreground command running after it, such as:
tail -f server.log
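Putting that together, an entry script along these lines keeps the container alive (the confd line and log path are the ones from above; treat this as a sketch, not a drop-in fix):
#!/bin/sh
# Start confd in the background, as before.
/usr/local/bin/confd -interval=30 -backend etcd -node $CONFIG_CENTER &
# Keep a foreground process running so Docker does not consider the container finished.
exec tail -f server.log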
If you are using CMD at the end of your Dockerfile, what you can do is add the code at the end. This will only work if your image is built on Ubuntu, or any OS that can use bash.
&& /bin/bash
Briefly the end of your Dockerfile will look like something like this.
...
CMD ls && ... && /bin/bash
So if you have anything running automatically when you run your Docker image, once that task is complete a bash shell will be active inside your container, and you can then enter your shell commands.
Maybe it is just me, but on CentOS 7.3.1611 and Docker 1.12.6 I ended up having to use a combination of the answers posted by @VonC and @Christopher Simon to get this working reliably. Nothing I did before this would stop the container from exiting after it ran CMD successfully. I am starting oracle-xe-11Gr2 and sshd.
Dockerfile
...
RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N '' && systemctl enable sshd
...
CMD /etc/init.d/oracle-xe start && /sbin/sshd && tail -f /dev/null
Then adding -d, -t, and -i to docker run:
docker run --shm-size=2g --name oracle-db -d -t -i -p 5022:22 -p 5080:8080 -p 1521:1521 centos-oracle:7.3.1611
Finally, after hours of bashing my head against the wall:
ssh -v root@127.0.0.1 -p 5022
...
root@127.0.0.1's password:
debug1: Authentication succeeded (password).
For whatever reason the above will exit after executing CMD if the tail -f is removed, or any of the -t -d -i options are omitted.
I had the same issue; just opening another terminal with a bash in it worked for me:
create container:
docker run -d mcr.microsoft.com/mssql/server:2019-CTP3.0-ubuntu
containerid=52bbc9b30557
start container:
docker start 52bbc9b30557
start bash to keep container running:
docker exec -it 52bbc9b30557 bash
start process you need:
docker exec -it 52bbc9b30557 /path_to_your_cool_app
Running docker with interactive mode might solve the issue.
Here is an example of running an image with and without interactive mode:
chaitra@RSK-IND-BLR-L06:~/dockers$ sudo docker run -d -t -i test_again1.0
b6b9a942a79b1243bada59db19c7999cfff52d0a8744542fa843c95354966a18
chaitra@RSK-IND-BLR-L06:~/dockers$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
chaitra@RSK-IND-BLR-L06:~/dockers$ sudo docker run -d -t -i test_again1.0 bash
c3d6a9529fd70c5b2dc2d7e90fe662d19c6dad8549e9c812fb2b7ce2105d7ff5
chaitra@RSK-IND-BLR-L06:~/dockers$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c3d6a9529fd7 test_again1.0 "bash" 2 seconds ago Up 1 second awesome_haibt
You can simply use:
docker container run -d -it <image name or id> /bin/bash
I have explained it in the following post that has the same question.
How to retain docker alpine container after "exit" is used?
I was also facing the same problem, but in a different manner. When I created Docker containers, the unused containers that were just running in the background were automatically stopped. Sometimes it also stopped containers that were in use.
In my situation, this was because of the permissions previously set on the docker.sock file.
What you have to do is:
Install Docker again. (As I work on Ubuntu, I install it from here.)
Run the command to change the permissions:
sudo chmod 666 /var/run/docker.sock
Install docker-compose (this is optional, as I have a compose file to create many containers together):
sudo curl -L "https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
Check the version to ensure that you have the latest one and won't run into problems with deprecations.
Then I run the Docker container build.
Argument order matters
Jersey Bean's answer (all 3 examples) worked for me. After quite a bit of trial and error I realized that the order of the arguments matters.
Keeps the container running in the background:
docker run -t -d <image-name>
Keeps the container running in the foreground: docker run <image-name> -t -d (flags placed after the image name are passed to the container as its command rather than being parsed by docker run).
It wasn't obvious to me coming from a Powershell background.
If you want to operate on the container, you need to run it in the foreground to keep it alive.
There are multiple options out there for running the container in a foreground or detached state. But if you still feel the issue is not resolved, you can try troubleshooting by viewing the container logs:
sudo docker logs -f <container-name> >> container.log
Additionally, you can use --details to show extra details provided to the logs.
Incorrect Path to App in Dockerfile:
I was migrating an application from a RHEL server to a Docker container using Alpine Linux.
No errors during the build, so I was surprised to see the container immediately exit!
First port of call:
docker logs <containerID>
This revealed that the path of the binary I had supplied to CMD in the Dockerfile was bogus:
line 0: /sbin/postfix: not found
Well that told me how things were broken, but not specifically where: I still required the correct path for the binary in Alpine Linux...
Troubleshooting:
Googling didn't reveal the correct path to it, so I added the following line to my Dockerfile:
RUN which postfix
I then reviewed my build log, captured by appending the command below to my build command, to retrieve the output of RUN which postfix:
--progress=plain > /path/to/build.log 2>&1
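For reference, the full build invocation implied above might look something like this (the image tag is a placeholder):
docker build --progress=plain -t myimage:test . > /path/to/build.log 2>&1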
The Fix:
I deleted this test build, supplied the correct path (/usr/sbin/postfix) to CMD in the Dockerfile, deleted RUN which postfix, and ran another build.
Voila; the process now remained up.
So a duff path was causing the container to immediately exit...
These 4 commands all work to keep your docker container running:
docker run -td centos
docker run -dt centos
docker run -t -d centos
docker run -d -t centos
First, you need to check whether any container is running.
Type the command:
docker ps -a
If any container is running, then stop it.
Type the command:
docker stop <container-id>
Now, finally, run Docker using the command below:
docker run -t -p 2020:3000 dockerImageName
Then open Google Chrome and visit localhost:2020.
Congrats :)
