Connection to tcp://localhost:8554?timeout=0 failed: Cannot assign requested address - docker

I have two docker containers. The first one I run using this command:
docker run -d --network onprem_network --name rtsp_simple_server --rm -t -e RTSP_PROTOCOLS=tcp -p 8554:8554 aler9/rtsp-simple-server
The second container is created from these files:
Dockerfile:
FROM python:slim-buster
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
WORKDIR /code
COPY rtsp_streaming.py /code/
COPY ConsoleCapture_clipped.mp4 /code
RUN apt-get update && apt-get install -y ffmpeg # optionally also: libsm6 libxext6
CMD ["python", "/code/rtsp_streaming.py"]
rtsp_streaming.py:
import os
os.system("ffmpeg -re -stream_loop 0 -i ConsoleCapture_clipped.mp4 -c copy -f rtsp rtsp://localhost:8554/mystream")
I run the second docker container like so:
docker run --network onprem_network -v ${data_folder}:/code/Philips_MR --name rtsp_streaming -d rtsp_streaming
docker ps -a yields:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
48ea091b870d rtsp_streaming "python /code/rtsp_s…" 18 minutes ago Exited (0) 18 minutes ago rtsp_streaming
5376e070f89f aler9/rtsp-simple-server "/rtsp-simple-server" 19 minutes ago Up 19 minutes 0.0.0.0:8554->8554/tcp rtsp_simple_server
The second container exits quickly with this error:
Connection to tcp://localhost:8554?timeout=0 failed: Cannot assign requested address
Any suggestions how to fix this?

You should use rtsp_simple_server:8554 instead of localhost.
Inside the container called rtsp_streaming, localhost refers to rtsp_streaming itself, and inside rtsp_simple_server it refers to rtsp_simple_server. So you should use the container's name, which Docker's embedded DNS resolves because both containers are attached to the same user-defined network (onprem_network).
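For example, the ffmpeg command in rtsp_streaming.py would become (a minimal sketch, keeping the original file and stream names):
# point ffmpeg at the server container's name instead of localhost
ffmpeg -re -stream_loop 0 -i ConsoleCapture_clipped.mp4 -c copy -f rtsp rtsp://rtsp_simple_server:8554/mystream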

Related

Container in docker is getting started but couldn't be found in "docker ps" command

I have created 2 containers in docker. However, one of them is visible and the other is not.
Context:
I created the first container from the jenkins docker image; it is up and running and can be seen with the docker ps command.
Then I tried to create an image to be consumed by the second container.
The Dockerfile I wrote in vi to create the image:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user && \
    echo "1234" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh/ && \
    chmod 600 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
The build ran successfully: "docker-compose build" built the image from the Dockerfile without errors.
Once it was successfully built, I tried to start it using:
[jenkins@localhost jenkins-data]$ docker-compose up -d
jenkins is up-to-date
Starting remote-host ... done
After this, when I run:
[jenkins@localhost jenkins-data]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5c1ee0507091 jenkins/jenkins "/sbin/tini -- /usr/…" 5 days ago Up 5 minutes 0.0.0.0:8080->8080/tcp, 50000/tcp jenkins
It only shows one container running, while the remote-host container is not visible.
Is there any way to check whether the remote-host container is actually running, or whether there is an issue?
I'm new to docker and jenkins; any lead is highly appreciated. Thank you.
docker ps only shows running containers.
Using docker ps -a you see both running and stopped containers.
See the Docker documentation for ps.
Probably the remote-host container is no longer running.
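For example (assuming the container name remote-host from the compose output above), you can confirm its status and see why it exited:
# -a includes stopped containers; docker logs shows the exited container's output
docker ps -a --filter name=remote-host
docker logs remote-host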
A container stops as soon as the main process launched by its CMD exits. sshd normally detaches and becomes a daemon, which would end the container immediately; the -D flag in CMD /usr/sbin/sshd -D prevents that detaching and keeps sshd in the foreground, so keep it and check docker logs remote-host for why sshd itself exited. Alternatively, you can run sshd in detached mode and use a while/sleep loop to keep the container running, as sketched below.
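A minimal sketch of that second approach (shell-form CMD; the sleep interval is arbitrary):
# start sshd as a background daemon, then loop forever so PID 1 never exits
CMD /usr/sbin/sshd && while true; do sleep 60; done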

docker, mariadb doesn't start at "init", based in debian:stable

I am trying to write a Dockerfile like this:
FROM debian:stable
RUN apt-get update
RUN apt-get install -y mariadb-server
EXPOSE 3306
CMD ["mysqld"]
I create the image with
docker build -t debian1 .
And I create the container with
docker run -d --name my_container_debian -i -t debian1
20 seconds later, docker ps -a shows that the container has exited. Why? I want the container to stay up with mariadb running. Thanks, and sorry for the question.
mysqld alone would exit too soon.
If you look at a MySQL server Dockerfile, you will note that its ENTRYPOINT is a script, docker-entrypoint.sh, which initializes the database on first run and finally execs mysqld in the foreground:
exec "$@"
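A minimal sketch of that pattern for mariadb (the init check and paths are illustrative assumptions, not the official script):
#!/bin/bash
# docker-entrypoint.sh: initialize the data directory on first run,
# then replace this shell with the server so it runs in the foreground as PID 1
if [ ! -d /var/lib/mysql/mysql ]; then
    mysql_install_db --user=mysql --datadir=/var/lib/mysql
fi
exec "$@"
and wire it up in the Dockerfile:
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["mysqld"]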

amc, aerospike are not recognized inside docker container

I have a docker ubuntu 16.04 image and I'm running the aerospike server in it.
$ docker run -d -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 -p 8081:8081 --name aerospike aerospike/aerospike-server
The docker container is running successfully.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b0b4c63d7e22 aerospike/aerospike-server "/entrypoint.sh asd" 36 seconds ago Up 35 seconds 0.0.0.0:3000-3003->3000-3003/tcp, 0.0.0.0:8081->8081/tcp aerospike
I've logged into the docker container
$ docker exec -it b0b4c63d7e22 bash
root@b0b4c63d7e22:/#
I have listed the directories -
root@b0b4c63d7e22:/# ls
bin boot core dev entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@b0b4c63d7e22:/#
I changed the directory to the bin folder and listed the commands:
root@b0b4c63d7e22:/# cd bin
root@b0b4c63d7e22:/bin# ls
bash cat chgrp chmod chown cp dash date dd df dir dmesg dnsdomainname domainname
echo egrep false fgrep findmnt grep gunzip gzexe gzip hostname ip journalctl kill
ln login loginctl ls lsblk mkdir mknod mktemp more mount mountpoint mv netstat
networkctl nisdomainname pidof ping ping6 ps pwd rbash readlink rm rmdir run-parts
sed sh sh.distrib sleep ss stty su sync systemctl systemd systemd-ask-password
systemd-escape systemd-inhibit systemd-machine-id-setup systemd-notify
systemd-tmpfiles systemd-tty-ask-password-agent tailf tar tempfile touch true
umount uname uncompress vdir wdctl which ypdomainname zcat zcmp zdiff zegrep
zfgrep zforce zgrep zless zmore znew
Then I want to check the service:
root@b0b4c63d7e22:/bin# service amc status
amc: unrecognized service
Aerospike's official docker container does not run Aerospike Server as a daemon, but as a foreground process. You can see this in the official GitHub Dockerfile.
AMC is not part of Aerospike's Docker Image. It is up to you to run AMC from the environment of your choosing.
Finally, since you have not created a custom aerospike.conf file, Aerospike Server will only respond to clients on the Docker internal network. The -p parameters are not sufficient by themselves to expose Aerospike's ports to clients; you also need to configure access-address (sketched below) if you want client access from outside the Docker environment. Read more about Aerospike's networking at: https://www.aerospike.com/docs/operations/configure/network/general
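A minimal sketch of the relevant aerospike.conf stanza (the IP is a placeholder for your host's externally reachable address, not a value from the question):
network {
    service {
        address any
        port 3000
        # advertise an address that clients outside Docker can reach
        access-address 203.0.113.10
    }
}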
You can build your own Docker container for amc to connect to aerospike running on containers.
Here is a sample Dockerfile for AMC.
cat Dockerfile
FROM ubuntu:xenial
ENV AMC_VERSION 4.0.13
# Install AMC server
RUN \
apt-get update -y \
&& apt-get install -y wget python python-argparse python-bcrypt python-openssl logrotate net-tools iproute2 iputils-ping \
&& wget "https://www.aerospike.com/artifacts/aerospike-amc-community/${AMC_VERSION}/aerospike-amc-community-${AMC_VERSION}_amd64.deb" -O aerospike-amc.deb \
&& dpkg -i aerospike-amc.deb \
&& apt-get purge -y
# Expose Aerospike ports
#
# 8081 – amc port
#
EXPOSE 8081
# Execute the run script in foreground mode
ENTRYPOINT ["/opt/amc/amc"]
CMD ["-config-file=/etc/amc/amc.conf", "-config-dir=/etc/amc"]
#/opt/amc/amc -config-file=/etc/amc/amc.conf -config-dir=/etc/amc
# Docker build sample:
# docker build -t amctest .
# Docker run sample for running amc on port 8081
# docker run -tid --name amc -p 8081:8081 amctest
# and access through http://127.0.0.1:8081
Then you can build the image:
docker build -t amctest .
Sending build context to Docker daemon 50.69kB
Step 1/6 : FROM ubuntu:xenial
---> 2fa927b5cdd3
Step 2/6 : ENV AMC_VERSION 4.0.13
---> Using cache
---> edd6bddfe7ad
Step 3/6 : RUN apt-get update -y && apt-get install -y wget python python-argparse python-bcrypt python-openssl logrotate net-tools iproute2 iputils-ping && wget "https://www.aerospike.com/artifacts/aerospike-amc-community/${AMC_VERSION}/aerospike-amc-community-${AMC_VERSION}_amd64.deb" -O aerospike-amc.deb && dpkg -i aerospike-amc.deb && apt-get purge -y
---> Using cache
---> f916199044d8
Step 4/6 : EXPOSE 8081
---> Using cache
---> 06f7888c1721
Step 5/6 : ENTRYPOINT /opt/amc/amc
---> Using cache
---> bc39346cd94f
Step 6/6 : CMD -config-file=/etc/amc/amc.conf -config-dir=/etc/amc
---> Using cache
---> 8ae4300e7c7c
Successfully built 8ae4300e7c7c
Successfully tagged amctest:latest
and finally run it with port forwarding to port 8081:
docker run -tid --name amc -p 8081:8081 amctest
a07cdd8bf8cec6ba41ce068c01544920136a6905e7a05e9a2c315605f62edfce

Running IBM DOORS in a docker container

I've managed to install IBM DOORS 9 in a container and this is my Dockerfile:
FROM rational/doors:2.2
ENV REFRESHED_AT 27-03-2017
RUN yum install -y -q libstdc++.so.6 libuuid.so.1
RUN export DOORSHOME=/ibm/rational/doors/9.6.1.6/DOORS_Database_Server
RUN export SERVERDATA=/ibm/rationa/doors/data
RUN export PATH=$DOORSHOME/bin:$PATH
RUN export PORTNUMBER=36677
RUN export DOORSHOME SERVERDATA PATH PORTNUMBER DOORSDATA
EXPOSE 36677
WORKDIR /ibm/rational/doors/9.6.1.6/DOORS_Database_Server/bin
ENTRYPOINT ./doorsd
However, if I try to run it
docker run -d -p 36677 rational/doors:2.4
it exits after a couple of seconds:
docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4a0b77173531 rational/doors:2.4 "/bin/bash" 7 seconds ago Exited (0) 6 seconds ago festive_lovelace
What am I missing here? If I run the container manually everything works perfectly.

Why does docker "--filter ancestor=imageName" find the wrong container?

I have a deployment script that builds new images, stops the existing containers with the same image names, then starts new containers from those images.
I stop the container by image name using the answer here: Stopping docker containers by image name - Ubuntu
But this command stops containers that don't have the specified image name. What am I doing wrong?
Here is the Dockerfile:
FROM ubuntu:14.04
MAINTAINER j@eka.com
# Settings
ENV NODE_VERSION 5.11.0
ENV NVM_DIR /root/.nvm
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install libs
RUN apt-get update
RUN apt-get install curl -y
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash \
&& chmod +x $NVM_DIR/nvm.sh \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
RUN apt-get clean
# Install app
RUN mkdir /app
COPY ./app /app
#Run the app
CMD ["node", "/app/src/app.js"]
I build like so:
docker build -t "$serverImageName" .
and start like so:
docker run -d -p "3000:3000" -e db_name="$db_name" -e db_username="$db_username" -e db_password="$db_password" -e db_host="$db_host" "$serverImageName"
Why not use the container name to differentiate your environments?
docker run -d --rm --name nginx-dev nginx
40ca9a6db09afd78e8e76e690898ed6ba2b656f777b84e7462f4af8cb4a0b17d
docker run -d --rm --name nginx-qa nginx
347b32c85547d845032cbfa67bbba64db8629798d862ed692972f999a5ff1b6b
docker run -d --rm --name nginx nginx
3bd84b6057b8d5480082939215fed304e65eeac474b2ca12acedeca525117c36
Then use docker ps
docker ps -f name=nginx$
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3bd84b6057b8 nginx "nginx -g 'daemon ..." 30 seconds ago Up 28 seconds 80/tcp, 443/tcp nginx
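Your deployment script can then stop each environment by its exact name instead of by image (a sketch; nginx-dev stands in for your container's name):
# -q prints only container IDs; the trailing $ anchors the name match
docker stop $(docker ps -q -f "name=nginx-dev$")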
According to the docs, --filter ancestor can find the wrong containers if their images are in any way children of the specified image, since the filter matches any container whose image shares the given image as an ancestor.
So to be sure my images are separate right from the start, I added this line to the start of my Dockerfile, after the FROM and MAINTAINER instructions:
RUN echo DEVTESTLIVE: This line ensures that this container will never be confused as an ancestor of another environment
Then in my build scripts after copying the dockerfile to the distribution folder I replace DEVTESTLIVE with the appropriate environment:
sed -i -e "s/DEVTESTLIVE/$env/g" ../dist/server/dockerfile
This seems to have worked; I now have containers for all three environments running simultaneously and can start and stop them automatically through their image names.
