AMC, Aerospike are not recognized inside Docker container

I have a Docker Ubuntu 16.04 image and I'm running Aerospike server in it.
$ docker run -d -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 -p 8081:8081 --name aerospike aerospike/aerospike-server
The docker container is running successfully.
$ docker ps
CONTAINER ID   IMAGE                        COMMAND                CREATED          STATUS          PORTS                                                      NAMES
b0b4c63d7e22   aerospike/aerospike-server   "/entrypoint.sh asd"   36 seconds ago   Up 35 seconds   0.0.0.0:3000-3003->3000-3003/tcp, 0.0.0.0:8081->8081/tcp   aerospike
I've logged into the docker container
$ docker exec -it b0b4c63d7e22 bash
root@b0b4c63d7e22:/#
I have listed the directories -
root@b0b4c63d7e22:/# ls
bin  boot  core  dev  entrypoint.sh  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@b0b4c63d7e22:/#
I changed to the bin directory and listed the commands:
root@b0b4c63d7e22:/# cd bin
root@b0b4c63d7e22:/bin# ls
bash   dnsdomainname  ip          mount          readlink    systemctl                       touch       zegrep
cat    domainname     journalctl  mountpoint     rm          systemd                         true        zfgrep
chgrp  echo           kill        mv             rmdir       systemd-ask-password            umount      zforce
chmod  egrep          ln          netstat        run-parts   systemd-escape                  uname       zgrep
chown  false          login       networkctl     sed         systemd-inhibit                 uncompress  zless
cp     fgrep          loginctl    nisdomainname  sh          systemd-machine-id-setup        vdir        zmore
dash   findmnt        ls          pidof          sh.distrib  systemd-notify                  wdctl       znew
date   grep           lsblk       ping           sleep       systemd-tmpfiles                which
dd     gunzip         mkdir       ping6          ss          systemd-tty-ask-password-agent  ypdomainname
df     gzexe          mknod       ps             stty        tailf                           zcat
dir    gzip           mktemp      pwd            su          tar                             zcmp
dmesg  hostname       more        rbash          sync        tempfile                        zdiff
Then I tried to check the service:
root@b0b4c63d7e22:/bin# service amc status
amc: unrecognized service

Aerospike's official Docker container does not run Aerospike Server as a daemon, but as a foreground process. You can see this in the official GitHub Dockerfile.
AMC is not part of Aerospike's Docker Image. It is up to you to run AMC from the environment of your choosing.
Finally, since you have not created a custom aerospike.conf file, Aerospike Server will only respond to clients on the Docker internal network. The -p parameters are not sufficient in themselves to expose Aerospike's ports to clients; you would also need to configure access-address if you want client access from outside of the Docker environment. Read more about Aerospike's networking at: https://www.aerospike.com/docs/operations/configure/network/general
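For illustration only, here is a minimal sketch of the service subsection such a custom aerospike.conf could contain; 203.0.113.10 is a placeholder you would replace with the address clients should actually use to reach the host, and the other subsections (heartbeat, fabric) are omitted:

```
network {
    service {
        address any                  # listen on all interfaces
        port 3000
        access-address 203.0.113.10  # placeholder: the externally reachable IP
    }
}
```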

You can build your own Docker container for AMC to connect to Aerospike running in containers.
Here is a sample Dockerfile for AMC.
cat Dockerfile
FROM ubuntu:xenial
ENV AMC_VERSION 4.0.13
# Install AMC server
RUN \
apt-get update -y \
&& apt-get install -y wget python python-argparse python-bcrypt python-openssl logrotate net-tools iproute2 iputils-ping \
&& wget "https://www.aerospike.com/artifacts/aerospike-amc-community/${AMC_VERSION}/aerospike-amc-community-${AMC_VERSION}_amd64.deb" -O aerospike-amc.deb \
&& dpkg -i aerospike-amc.deb \
&& apt-get purge -y
# Expose Aerospike ports
#
# 8081 – amc port
#
EXPOSE 8081
# Execute the run script in foreground mode
ENTRYPOINT ["/opt/amc/amc"]
# each argument must be a separate array element in exec form
CMD ["-config-file=/etc/amc/amc.conf", "-config-dir=/etc/amc"]
#/opt/amc/amc -config-file=/etc/amc/amc.conf -config-dir=/etc/amc
# Docker build sample:
# docker build -t amctest .
# Docker run sample for running amc on port 8081
# docker run -tid --name amc -p 8081:8081 amctest
# and access through http://127.0.0.1:8081
Then you can build the image:
docker build -t amctest .
Sending build context to Docker daemon 50.69kB
Step 1/6 : FROM ubuntu:xenial
---> 2fa927b5cdd3
Step 2/6 : ENV AMC_VERSION 4.0.13
---> Using cache
---> edd6bddfe7ad
Step 3/6 : RUN apt-get update -y && apt-get install -y wget python python-argparse python-bcrypt python-openssl logrotate net-tools iproute2 iputils-ping && wget "https://www.aerospike.com/artifacts/aerospike-amc-community/${AMC_VERSION}/aerospike-amc-community-${AMC_VERSION}_amd64.deb" -O aerospike-amc.deb && dpkg -i aerospike-amc.deb && apt-get purge -y
---> Using cache
---> f916199044d8
Step 4/6 : EXPOSE 8081
---> Using cache
---> 06f7888c1721
Step 5/6 : ENTRYPOINT /opt/amc/amc
---> Using cache
---> bc39346cd94f
Step 6/6 : CMD -config-file=/etc/amc/amc.conf -config-dir=/etc/amc
---> Using cache
---> 8ae4300e7c7c
Successfully built 8ae4300e7c7c
Successfully tagged amctest:latest
and finally run it with port forwarding to port 8081:
docker run -tid --name amc -p 8081:8081 amctest
a07cdd8bf8cec6ba41ce068c01544920136a6905e7a05e9a2c315605f62edfce

Related

TCP/Telnet from inside docker container

I have a 'mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim' Docker container, with 2 things on it:
- a custom SDK running on it
- a dotnet5 application that connects to the SDK over TCP
If I connect to the container's bash, I can use telnet localhost 54321 to connect to my SDK successfully.
If I run the Windows SDK version on my development computer (Windows) and run my application in IIS Express instead of Docker, I can successfully connect with a telnet library (host 'localhost', port '54321'); this works.
However, I want to run both the SDK and my dotnet application in a Docker container, and when I try to connect from inside the container (the same code that works in the IIS-hosted version), it does not work. By running 'telnet localhost 54321' from the container command line I can confirm that the SDK is running. What am I doing wrong?
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim AS base
RUN apt-get update && apt-get install -y telnet
RUN apt-get update && apt-get install -y libssl1.1
RUN apt-get update && apt-get install -y libpulse0
RUN apt-get update && apt-get install -y libasound2
RUN apt-get update && apt-get install -y libicu63
RUN apt-get update && apt-get install -y libpcre2-16-0
RUN apt-get update && apt-get install -y libdouble-conversion1
RUN apt-get update && apt-get install -y libglib2.0-0
RUN mkdir /sdk
COPY ["Server/Sdk/SomeSDK*", "sdk/"]
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["Server/MyProject.Server.csproj", "Server/"]
COPY ["Shared/MyProject.Shared.csproj", "Shared/"]
COPY ["Client/MyProject.Client.csproj", "Client/"]
RUN dotnet restore "Server/MyProject.Server.csproj"
COPY . .
WORKDIR "/src/Server"
RUN dotnet build "MyProject.Server.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "MyProject.Server.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyProject.Server.dll"]
Code (working on IIS, when SDK is running on Windows, but not when both SDK and code are running inside the same Docker container):
var telnetClient = new TelnetClient("localhost", 54321, TimeSpan.FromSeconds(1), new CancellationToken());
await telnetClient.Connect();
Thread.Sleep(2000);
await telnetClient.Send("init");
Command line (working BOTH from the Windows CLI and from the Docker bash, so even when the code is not working, this works):
$ telnet localhost 54321
$ init
The issue might be this (but I'm not sure): when running the equivalent of 'telnet localhost 54321' from within dotnet, I receive 'telnet: connection refused by remote host'.
Make sure your docker run command also includes --network=host to use the same network if you want to reach out of the container, or create a bridge with --network=bridge (for example, to reach another container).
By default the Docker container is spawned on a separate, dedicated and private subnet on your machine (usually in 172.17.0.0/16), which is different from your machine's loopback subnet (127.0.0.0/8).
For connecting into the host's subnet, in this case 127.0.0.0/8, you need --network=host. For communication within the same container, though, it's not necessary and works out of the box.
For accessing the service in the container from the outside, you need to make sure your application's port is published, either with --publish HOST_PORT:DOCKER_PORT or --publish-all (random port(s) on the host are then assigned; check with docker ps).
Host to container (normal)
# host
telnet <container's IP> 8000 # connects, typing + return shows in netcat stdout
# container
docker run --rm -it --publish 8000:8000 alpine nc -v -l -p 8000
Container to host (normal)
# host
nc -v -l -p 8000
# container, docker run -it alpine
apk add busybox-extras
telnet localhost 8000 # hangs, is dead
Container to host (on host network)
# host
nc -v -l -p 8000
# container, docker run -it --network=host alpine
apk add busybox-extras
telnet localhost 8000 # connects, typing + return shows in netcat stdout
Within container
# start container
docker run -it --name alpine alpine
apk add busybox-extras
# exec into container to create service on 4000
docker exec -it alpine nc -v -l -p 4000
# exec into container to create service on 5000
docker exec -it alpine nc -v -l -p 5000
# telneting from the original terminal (the one with apk add)
telnet localhost 4000 # connects, works
telnet localhost 5000 # connects, works
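Container to container (user-defined bridge)

One case the examples above don't show: two separate containers reaching each other. A minimal sketch, assuming the network name testnet and container name server are free to use; containers on the same user-defined bridge resolve each other by name through Docker's embedded DNS:

```shell
# create a user-defined bridge and a listener container on it
docker network create testnet
docker run -d --rm --name server --network testnet alpine nc -v -l -p 8000
# a second container on the same network reaches the first by name
docker run --rm --network testnet alpine sh -c \
  'apk add -q busybox-extras && telnet server 8000'
```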

Connection to tcp://localhost:8554?timeout=0 failed: Cannot assign requested address

I have two docker containers. The first one I run using this command:
docker run -d --network onprem_network --name rtsp_simple_server --rm -t -e RTSP_PROTOCOLS=tcp -p 8554:8554 aler9/rtsp-simple-server
The second docker is created from these files:
Dockerfile:
FROM python:slim-buster
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
WORKDIR /code
COPY rtsp_streaming.py /code/
COPY ConsoleCapture_clipped.mp4 /code
RUN apt update && apt-get update && apt install ffmpeg -y # && apt-get install ffmpeg libsm6 libxext6 -y
CMD ["python", "/code/rtsp_streaming.py"]
rtsp_streaming.py:
import os
os.system("ffmpeg -re -stream_loop 0 -i ConsoleCapture_clipped.mp4 -c copy -f rtsp rtsp://localhost:8554/mystream")
I run the second docker container like so:
docker run --network onprem_network -v ${data_folder}:/code/Philips_MR --name rtsp_streaming -d rtsp_streaming
docker ps -a yields:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
48ea091b870d rtsp_streaming "python /code/rtsp_s…" 18 minutes ago Exited (0) 18 minutes ago rtsp_streaming
5376e070f89f aler9/rtsp-simple-server "/rtsp-simple-server" 19 minutes ago Up 19 minutes 0.0.0.0:8554->8554/tcp rtsp_simple_server
The second container exits quickly with this error:
Connection to tcp://localhost:8554?timeout=0 failed: Cannot assign requested address
Any suggestions how to fix this?
You should use rtsp_simple_server:8554 instead of localhost.
Inside the container called rtsp_streaming, localhost means rtsp_streaming itself, and inside rtsp_simple_server, localhost means rtsp_simple_server. Since both containers are on the same user-defined network (onprem_network), you should use the container's name, which Docker's embedded DNS resolves to the right address.
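The corresponding one-line change inside rtsp_streaming.py is to point ffmpeg at that name (a sketch; the file and stream names are taken from the question):

```shell
# push the stream to the server container by name instead of localhost;
# Docker's embedded DNS resolves rtsp_simple_server on onprem_network
ffmpeg -re -stream_loop 0 -i ConsoleCapture_clipped.mp4 -c copy \
  -f rtsp rtsp://rtsp_simple_server:8554/mystream
```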

Why does docker "--filter ancestor=imageName" find the wrong container?

I have a deployment script that builds new images, stops the existing containers with the same image names, then starts new containers from those images.
I stop the container by image name using the answer here: Stopping docker containers by image name - Ubuntu
But this command stops containers that don't have the specified image name. What am I doing wrong?
See here to watch docker stopping the wrong container:
Here is the dockerfile:
FROM ubuntu:14.04
MAINTAINER j@eka.com
# Settings
ENV NODE_VERSION 5.11.0
ENV NVM_DIR /root/.nvm
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install libs
RUN apt-get update
RUN apt-get install curl -y
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash \
&& chmod +x $NVM_DIR/nvm.sh \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
RUN apt-get clean
# Install app
RUN mkdir /app
COPY ./app /app
#Run the app
CMD ["node", "/app/src/app.js"]
I build like so:
docker build -t "$serverImageName" .
and start like so:
docker run -d -p "3000:3000" -e db_name="$db_name" -e db_username="$db_username" -e db_password="$db_password" -e db_host="$db_host" "$serverImageName"
Why not use the container name to differentiate your environments?
docker run -d --rm --name nginx-dev nginx
40ca9a6db09afd78e8e76e690898ed6ba2b656f777b84e7462f4af8cb4a0b17d
docker run -d --rm --name nginx-qa nginx
347b32c85547d845032cbfa67bbba64db8629798d862ed692972f999a5ff1b6b
docker run -d --rm --name nginx nginx
3bd84b6057b8d5480082939215fed304e65eeac474b2ca12acedeca525117c36
Then use docker ps
docker ps -f name=nginx$
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3bd84b6057b8 nginx "nginx -g 'daemon ..." 30 seconds ago Up 28 seconds 80/tcp, 443/tcp nginx
According to the docs, --filter ancestor matches containers whose image is the given image or a descendant of it, so it can find the "wrong" containers when images share ancestry.
So to be sure my images are separate right from the start, I added this line to the start of my Dockerfile, after the FROM and MAINTAINER lines:
RUN echo DEVTESTLIVE: This line ensures that this container will never be confused as an ancestor of another environment
Then in my build scripts after copying the dockerfile to the distribution folder I replace DEVTESTLIVE with the appropriate environment:
sed -i -e "s/DEVTESTLIVE/$env/g" ../dist/server/dockerfile
This seems to have worked; I now have containers for all three environments running simultaneously and can start and stop them automatically through their image names.
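As an alternative sketch (not from the original answer): Docker labels are built for exactly this kind of tagging, and docker ps can filter on them directly, which avoids the sed step entirely. The label key env here is an assumption:

```shell
# tag each container with its environment at run time
docker run -d --label env=dev "$serverImageName"
docker run -d --label env=live "$serverImageName"
# stop only the dev containers, regardless of image ancestry
docker stop $(docker ps -q --filter "label=env=dev")
```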

unable to run docker as non-root user?

I have tried this post and it did NOT help.
I have created a jenkins user and added it to the docker group.
I have also switched the user in the Dockerfile (see below).
I started the container as following
docker run -u jenkins -d -t -p 8080:8080 -v /var/jenkins:/jenkins -P docker-registry:5000/bar/helloworld:001
The container starts fine. but when I look at the process, this is what I have
root 13575 1 1 09:34 ? 00:05:56 /usr/bin/docker daemon -H fd://
root 28409 13575 0 16:13 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8080 -container-ip 172.17.0.2 -container-port 8080
The first one is the daemon, so I guess it is OK for it to be root.
But the second one is showing root, even though I switched to the jenkins user (by issuing sudo su jenkins) and started the container as jenkins. Why does this process belong to root?
Here is my dockerfile
#copy jenkins war file to the container
ADD http://mirrors.jenkins-ci.org/war/1.643/jenkins.war /opt/jenkins.war
RUN chmod 644 /opt/jenkins.war
ENV JENKINS_HOME /jenkins
RUN useradd -d /home/jenkins -m -s /bin/bash jenkins
USER jenkins
ENV HOME /home/jenkins
WORKDIR /home/jenkins
# Maven settings
RUN mkdir .m2
ADD settings.xml .m2/settings.xml
ENTRYPOINT ["java", "-jar", "/opt/jenkins.war"]
EXPOSE 8080
CMD [""]
EDIT2
I am certain the container is running. I could attach to the container.
I can also browse the Jenkins web UI, which is only possible if the container started without errors (Jenkins runs inside the container).
Here is my command inside the container
ps -ef | grep java
jenkins 1 0 7 19:29 ? 00:00:28 java -jar /opt/jenkins.war
ls -l /jenkins
drwxr-xr-x 2 jenkins jenkins 4096 Jan 11 18:54 jobs
But from the host file system, I see that the newly created "jobs" directory shows as user "admin"
ls -l /var/jenkins/
drwxr-xr-x 2 admin admin 4096 Jan 11 10:54 jobs
Inside the container, the jenkins process (war) is started by "jenkins" user. Once the jenkins starts, it writes to host file system under "admin" user.
Here is my entire Dockerfile (NOTE: I don't use the one from here)
FROM centos:7
RUN yum install -y sudo
RUN yum install -y -q unzip
RUN yum install -y -q telnet
RUN yum install -y -q wget
RUN yum install -y -q git
ENV mvn_version 3.2.2
# get maven
RUN wget --no-verbose -O /tmp/apache-maven-$mvn_version.tar.gz http://archive.apache.org/dist/maven/maven-3/$mvn_version/binaries/apache-maven-$mvn_version-bin.tar.gz
# verify checksum
RUN echo "87e5cc81bc4ab9b83986b3e77e6b3095 /tmp/apache-maven-$mvn_version.tar.gz" | md5sum -c
# install maven
RUN tar xzf /tmp/apache-maven-$mvn_version.tar.gz -C /opt/
RUN ln -s /opt/apache-maven-$mvn_version /opt/maven
RUN ln -s /opt/maven/bin/mvn /usr/local/bin
RUN rm -f /tmp/apache-maven-$mvn_version.tar.gz
ENV MAVEN_HOME /opt/maven
# set shell variables for java installation
ENV java_version 1.8.0_11
ENV filename jdk-8u11-linux-x64.tar.gz
ENV downloadlink http://download.oracle.com/otn-pub/java/jdk/8u11-b12/$filename
# download java, accepting the license agreement
RUN wget --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" -O /tmp/$filename $downloadlink
# unpack java
RUN mkdir /opt/java-oracle && tar -zxf /tmp/$filename -C /opt/java-oracle/
ENV JAVA_HOME /opt/java-oracle/jdk$java_version
ENV PATH $JAVA_HOME/bin:$PATH
# configure symbolic links for the java and javac executables
RUN update-alternatives --install /usr/bin/java java $JAVA_HOME/bin/java 20000 && update-alternatives --install /usr/bin/javac javac $JAVA_HOME/bin/javac 20000
# copy jenkins war file to the container
ADD http://mirrors.jenkins-ci.org/war/1.643/jenkins.war /opt/jenkins.war
RUN chmod 644 /opt/jenkins.war
ENV JENKINS_HOME /jenkins
#RUN useradd jenkins
#RUN chown -R jenkins:jenkins /home/jenkins
#RUN chmod -R 700 /home/jenkins
#USER jenkins
RUN useradd -d /home/jenkins -m -s /bin/bash jenkins
#RUN chown -R jenkins:jenkins /home/jenkins
USER jenkins
ENV HOME /home/jenkins
WORKDIR /home/jenkins
# Maven settings
RUN mkdir .m2
ADD settings.xml .m2/settings.xml
USER root
RUN chown -R jenkins:jenkins .m2
USER jenkins
ENTRYPOINT ["java", "-jar", "/opt/jenkins.war"]
EXPOSE 8080
CMD [""]
The second process
root 28409 13575 0 16:13 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8080 -container-ip 172.17.0.2 -container-port 8080
is NOT the process for your jenkins container but an internal process of the Docker engine to manage the network.
If, using the ps command, you cannot find the process which is supposed to run in your docker container, that means your docker container isn't running.
To ease figuring this out, start your container with the following command (adding --name test):
docker run --name test -u jenkins -d -t -p 8080:8080 -v /var/foo:/foo -P docker-registry:5000/bar/helloworld:001
Then type docker ps, you should see your container running. If not, type docker ps -a and you should see with which exit code it crashed.
If you need to know why it crashed, display its logs with docker logs test.
To look for the Jenkins process that runs from the official Jenkins docker image, use the following command:
ps aux | grep java
EDIT
why do the files seem to be owned by admin from the docker host's point of view?
In your docker image, the jenkins user has UID 1000. You can easily verify this with the following command: docker run --rm -u jenkins --entrypoint /bin/id docker-registry:5000/bar/helloworld:001
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
On your docker host, UID 1000 is for the admin user. You can verify this with id admin which in your case shows:
uid=1000(admin) gid=1000(admin) groups=1000(admin),10(wheel)
The users which are available in a Docker container are not the ones from the docker host. However it might happen by coincidence that they have the same UID. This is why the ls -l command run on the docker host will tell you the files are owned by the admin user.
In fact the files are owned by the user of UID 1000 which happens to be named admin on the docker host and jenkins on your docker image.
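You can demonstrate the same mechanism without Docker: ownership is stored as a numeric UID on the file, and ls -l merely translates that number through the local passwd database. A small sketch (stat -c is the GNU coreutils form):

```shell
# create a file and read back the numeric UID stored for it;
# it matches the current user's UID, whatever name maps to that number
f=$(mktemp)
stat -c %u "$f"   # prints the numeric owner UID
id -u             # prints the current user's UID (the same number)
rm -f "$f"
```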

Cannot publish any docker containers

I had Docker 1.7.1 and was following The Docker Book by James Turnbull.
I created two images (apache, jekyll) and a jekyll container, then
created an apache container as well, with the -P flag and EXPOSE 80 in the Dockerfile, but when I ran docker port <container> 80 I got a "no public port" error.
Then I upgraded to Docker 1.8.1, but nothing changed. I tried -P, -p, EXPOSE, and every argument, but I can't get it fixed.
My PORTS column when running docker ps -a is always empty.
Dockerfile:
FROM ubuntu:14.04
MAINTAINER AbdelRhman Khaled
ENV REFRESHED_AT 2015-09-05
RUN apt-get -yqq update
RUN apt-get -yqq install apache2
VOLUME [ "/var/www/html" ]
WORKDIR /var/www/html
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR
EXPOSE 80
ENTRYPOINT [ "/usr/sbin/apache2" ]
CMD [ "-D", "FOREGROUND" ]
commands:
$ sudo docker build -t jamtur01/apache .
$ sudo docker run -d -P --volumes-from james_blog jamtur01/apache
$ sudo docker port container_ID 80
Error: No public port '80/tcp' published for aa99fef6544a
I just changed the image tag name and all is working fine now.
