Running IBM DOORS in a Docker container

I've managed to install IBM DOORS 9 in a container and this is my Dockerfile:
FROM rational/doors:2.2
ENV REFRESHED_AT 27-03-2017
RUN yum install -y -q libstdc++.so.6 libuuid.so.1
RUN export DOORSHOME=/ibm/rational/doors/9.6.1.6/DOORS_Database_Server
RUN export SERVERDATA=/ibm/rationa/doors/data
RUN export PATH=$DOORSHOME/bin:$PATH
RUN export PORTNUMBER=36677
RUN export DOORSHOME SERVERDATA PATH PORTNUMBER DOORSDATA
EXPOSE 36677
WORKDIR /ibm/rational/doors/9.6.1.6/DOORS_Database_Server/bin
ENTRYPOINT ./doorsd
However, if I try to run it
docker run -d -p 36677 rational/doors:2.4
it exits after a couple of seconds:
docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4a0b77173531 rational/doors:2.4 "/bin/bash" 7 seconds ago Exited (0) 6 seconds ago festive_lovelace
What am I missing here? If I run the container manually everything works perfectly.
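Worth noting about the Dockerfile above: every RUN line executes in its own shell, so variables set with RUN export disappear once that layer finishes building and never reach the running container; ENV is the instruction that persists. A minimal sketch of the same Dockerfile using ENV (paths copied from the question, so the apparent typo "rationa" in the SERVERDATA path is preserved as a point to check; DOORSDATA is exported in the original but never assigned):

FROM rational/doors:2.2
ENV REFRESHED_AT=27-03-2017
RUN yum install -y -q libstdc++.so.6 libuuid.so.1
# ENV persists across build layers and into the container, unlike RUN export
ENV DOORSHOME=/ibm/rational/doors/9.6.1.6/DOORS_Database_Server
ENV SERVERDATA=/ibm/rationa/doors/data
ENV PORTNUMBER=36677
ENV PATH=$DOORSHOME/bin:$PATH
EXPOSE 36677
WORKDIR /ibm/rational/doors/9.6.1.6/DOORS_Database_Server/bin
ENTRYPOINT ./doorsd

Separately, the docker ps output shows the exited container's command as /bin/bash, which suggests the rational/doors:2.4 image being run is not the one built from this Dockerfile; an image built with this ENTRYPOINT would show ./doorsd (or /bin/sh -c ./doorsd) in the COMMAND column.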

Related

Connection to tcp://localhost:8554?timeout=0 failed: Cannot assign requested address

I have two docker containers. The first one I run using this command:
docker run -d --network onprem_network --name rtsp_simple_server --rm -t -e RTSP_PROTOCOLS=tcp -p 8554:8554 aler9/rtsp-simple-server
The second container is created from these files:
Dockerfile:
FROM python:slim-buster
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
WORKDIR /code
COPY rtsp_streaming.py /code/
COPY ConsoleCapture_clipped.mp4 /code
RUN apt update && apt-get update && apt install ffmpeg -y # && apt-get install ffmpeg libsm6 libxext6 -y
CMD ["python", "/code/rtsp_streaming.py"]
rtsp_streaming.py:
import os
os.system("ffmpeg -re -stream_loop 0 -i ConsoleCapture_clipped.mp4 -c copy -f rtsp rtsp://localhost:8554/mystream")
I run the second docker container like so:
docker run --network onprem_network -v ${data_folder}:/code/Philips_MR --name rtsp_streaming -d rtsp_streaming
docker ps -a yields:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
48ea091b870d rtsp_streaming "python /code/rtsp_s…" 18 minutes ago Exited (0) 18 minutes ago rtsp_streaming
5376e070f89f aler9/rtsp-simple-server "/rtsp-simple-server" 19 minutes ago Up 19 minutes 0.0.0.0:8554->8554/tcp rtsp_simple_server
The second container exits quickly with this error:
Connection to tcp://localhost:8554?timeout=0 failed: Cannot assign
requested address
Any suggestions on how to fix this?
You should use rtsp_simple_server:8554 instead of localhost.
Inside the container called rtsp_streaming, localhost refers to that container itself, and inside rtsp_simple_server it refers to that container; neither resolves to the other. Since both containers are attached to the same user-defined network (onprem_network), you should address the server by its container name.
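A minimal sketch of the corrected rtsp_streaming.py, changing only the host in the URL (the /mystream path is taken from the question):

import os

# Containers on the same user-defined network resolve each other by container name
# via Docker's embedded DNS; localhost here would point back at this container.
os.system("ffmpeg -re -stream_loop 0 -i ConsoleCapture_clipped.mp4 -c copy -f rtsp rtsp://rtsp_simple_server:8554/mystream")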

I can't get access to an exposed port in docker

I'm using Ubuntu 20.04 and running Python 3.8. Here is my dockerfile:
FROM python:3.8
WORKDIR /usr/src/flog/
COPY requirements/ requirements/
RUN pip install -r requirements/dev.txt
RUN pip install gunicorn
COPY flog/ flog/
COPY migrations/ migrations/
COPY wsgi.py ./
COPY docker_boot.sh ./
RUN chmod +x docker_boot.sh
ENV FLASK_APP wsgi.py
EXPOSE 5000
ENTRYPOINT ["./docker_boot.sh"]
and my docker_boot.sh
#! /bin/sh
flask deploy
flask create-admin
flask forge
exec gunicorn -b 0.0.0.0:5000 --access-logfile - --error-logfile - wsgi:app
I ran docker run flog -d -p 5000:5000 in my terminal, but I couldn't reach my app at localhost:5000. It worked when I used 172.17.0.2:5000 (the container's IP address on the Docker bridge), yet I want the app to be reachable on localhost:5000.
I'm sure there is nothing wrong with the requirements/dev.txt and the code because it works well when I run flask run directly in my terminal.
Edit on 2021.3.16:
Add docker ps information when docker run flog -d -p 5000:5000 is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ff048e904183 flog "./docker_boot.sh -d…" 8 seconds ago Up 6 seconds 5000/tcp inspiring_kalam
It is strange that there is no port mapping to the host. I'm sure the firewall is off.
Can anyone help me? Thanks.
Use docker run -d -p 0.0.0.0:5000:5000 flog.
The arguments and flags that come after the image name are passed to the container's entrypoint rather than to docker run, which is why the COMMAND column above shows ./docker_boot.sh -d… and why no host port mapping appears.
Run docker ps and you need to see something like
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
565a97468fc7 flog "./docker_boot.sh" 1 minute ago Up 1 minute 0.0.0.0:5000->5000/tcp xxxxxxxx_xxxxxxx
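A quick way to verify the mapping from the host (assuming the Flask app serves something at the root path):

curl http://localhost:5000/

If the PORTS column shows 0.0.0.0:5000->5000/tcp, connections to the host's port 5000 are forwarded into the container.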

Container in Docker is getting started but can't be found in "docker ps" command

I have created 2 containers in Docker. However, one of them is visible and the other is not.
Context:
I created the first container from the official Jenkins Docker image; it is up and running and can be seen with the docker ps command.
Then I tried to build an image to be consumed by the second container.
The Dockerfile I wrote (in vi) to build that image:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user && \
echo "1234" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh/ && \
chmod 600 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
The build ran successfully: docker-compose build built the image from this Dockerfile without errors.
Once it was built, I tried to start it using:
[jenkins@localhost jenkins-data]$ docker-compose up -d
jenkins is up-to-date
Starting remote-host ... done
After this, when I run:
[jenkins@localhost jenkins-data]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5c1ee0507091 jenkins/jenkins "/sbin/tini -- /usr/…" 5 days ago Up 5 minutes 0.0.0.0:8080->8080/tcp, 50000/tcp jenkins
It shows only one running container; the remote-host container is not visible.
Is there any way to check whether the remote-host container is actually running, or what went wrong?
I'm new to Docker and Jenkins; any lead is highly appreciated. Thank you.
docker ps only shows running containers.
Using docker ps -a you see both running and stopped containers.
See Docker documentation about ps.
Probably the remote-host container is not running anymore.
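To find out why it stopped, the exit code and logs are the first things to check (the container name remote-host comes from the docker-compose output above):

docker ps -a --filter name=remote-host    # shows the exit status in the STATUS column
docker logs remote-host                   # shows sshd's output, including startup errors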
The container stopped because the main process launched by the CMD detached and became a daemon.
The main process has to stay in the foreground, which is exactly what the -D flag does for sshd, so keep it in the CMD (CMD /usr/sbin/sshd -D). Alternatively, you can run sshd in detached mode and use a while/sleep loop to keep the container running, as sketched below.
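A minimal sketch of that alternative (without -D, sshd forks into the background, so the shell loop becomes the container's long-running foreground process):

# sshd daemonizes; the loop keeps the container's main process alive
CMD /usr/sbin/sshd && while true; do sleep 60; done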

Simple Dockerfile does not work

This is my Dockerfile:
FROM debian:stable
MAINTAINER xxxx <xxxx@xxxx.com>
RUN apt-get update && apt-get upgrade -y
CMD ["/bin/bash"]
Then, I run in the directory of Dockerfile:
docker build -t testimage .
Finally:
docker run -d testimage
The container does not start:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c4fe93e2e225 test "/bin/bash" 17 minutes ago Exited (0) 9 minutes ago gloomy_ritchie
You are trying to run a detached container (-d), but you are also attempting to launch an interactive shell (/bin/bash). With no terminal attached in detached mode, bash exits immediately, and the container exits with it.
If you just want to run an interactive shell in your container, get rid of the -d:
docker run -it testimage
The -it flags set up the container for interactive use; see the man page for docker-run for more information.
A detached container is most often used to run a persistent service (like a database, or a web server), although you can run anything as long as it doesn't expect to be attached to an active terminal.
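For example (a generic illustration, not from the original question), detaching makes sense when the image's default command is a long-running server:

docker run -d -p 8080:80 nginx

The container stays up because the nginx process runs in the foreground and never expects a terminal.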

Docker-compose up does not start a container

Dockerfile:
FROM shawnzhu/ruby-nodejs:0.12.7
RUN \
apt-get install -y git \
&& npm install -g bower gulp grunt \
&& gem install sass
RUN useradd -ms /bin/bash devel
# Deal with ssh
COPY ssh_keys/id_rsa /devel/.ssh/id_rsa
COPY ssh_keys/id_rsa.pub /devel/.ssh/id_rsa.pub
RUN echo "IdentityFile /devel/.ssh/id_rsa" > /devel/.ssh/config
# set root password
RUN echo 'root:password' | chpasswd
# Add gitconfig
COPY .gitconfig /devel/.gitconfig
USER devel
WORKDIR /var/www/
EXPOSE 80
docker-compose.yml file:
nodejs:
build: .
ports:
- "8001:80"
- "3000:3000"
volumes:
- ~/Web/docker/nodejs/www:/var/www
Commands:
$ docker-compose build nodejs
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
nodejs_nodejs latest aece5fb27134 2 minutes ago 596.5 MB
shawnzhu/ruby-nodejs 0.12.7 bbd5b568b88f 5 months ago 547.5 MB
$ docker-compose up -d nodejs
Creating nodejs_nodejs_1
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c24c6d0e756b nodejs_nodejs "/bin/bash" About a minute ago Exited (0) About a minute ago nodejs_nodejs_1
As you can see, docker-compose up -d should have created the container and kept it running in the background, but instead it exited with code 0.
If your image doesn't run a long-lived process (for example a web server listening on port 80), the container exits as soon as its command finishes, because Docker containers are meant to be ephemeral. This Dockerfile sets USER, WORKDIR and EXPOSE but no CMD or ENTRYPOINT, so the container falls back to the base image's /bin/bash (visible in the docker ps --all output above), and a non-interactive bash exits immediately.
If you just want to start a container and interact with it via terminal, don't use docker-compose up -d, use the following instead:
docker run -it --entrypoint=/bin/bash [your_image_id]
This will start your container and run /bin/bash; the -it flags keep an interactive terminal session attached so you can work inside the container. When you are done, press Ctrl-D to exit.
I had a similar problem with a SQL Server 2017 container exiting soon after it was created. The process running inside the container must be long-running, otherwise the container exits. In the docker-compose scenario I implemented the tty: true approach documented at https://www.handsonarchitect.com/2018/01/docker-compose-tip-how-to-avoid-sql.html, as sketched below.
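A minimal sketch of that tty approach applied to the compose file from the question (this only keeps the container alive, e.g. for exec-ing into it; it is not a substitute for a real long-running command):

nodejs:
  build: .
  tty: true
  ports:
    - "8001:80"
    - "3000:3000"
  volumes:
    - ~/Web/docker/nodejs/www:/var/www

With tty: true, Compose allocates a pseudo-terminal, so the default /bin/bash keeps waiting for input instead of exiting on EOF.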
