Expose a port inside Docker at docker run

I just want to expose port 5555, which is bound to Celery Flower, to the host IP and port. Could someone please help with this?
Below is part of the Dockerfile.
# Make port 5555 available to the world outside this container
EXPOSE 5555
# Define environment variable
ENV NAME worker-app
# Create paths
RUN /etc/init.d/celeryd create-paths
# Clear Symfony app cache
RUN cd /srv/clickhq/ && rm -rf var/cache/*
RUN chown -R lighthouse:lighthouse /srv/clickhq/
# Clear PHP app cache
USER lighthouse
RUN cd /srv/clickhq/ && ./clearcache.sh
# Start celeryd, celerybeat and php-fpm services when the container launches
USER root
RUN chown -R lighthouse:lighthouse /var/run/celery/ && chown -R lighthouse:lighthouse /var/log/celery/
RUN chmod -R 755 /var/log/celery/ && chmod -R 755 /var/run/celery/
RUN chown -R lighthouse:lighthouse /srv/clickhq/
ENTRYPOINT sudo service celeryd start && sudo service celerybeat start && service php7.0-fpm start && service rsyslog start && /usr/bin/python /usr/local/bin/flower -A celery --broker=redis://password@192.168.51.4:6379/0 && bash
The docker run command I'm using is:
sudo docker run -it --rm --name worker-app -d worker-app --privileged -p 192.168.51.3:5555:5555 --net=bridge

The problem is that you are not actually passing the -p 192.168.51.3:5555:5555 argument to docker run, but to the entrypoint.
In this command sudo docker run -it --rm --name worker-app -d worker-app --privileged -p 192.168.51.3:5555:5555 --net="bridge", worker-app is the image name, so everything after it (--privileged -p 192.168.51.3:5555:5555 --net="bridge") is a parameter for the entrypoint.
It should work if you move the image name to the end:
sudo docker run -it --rm --name worker-app -d --privileged -p 192.168.51.3:5555:5555 --net=bridge worker-app
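As a sketch of the general rule (host IP and image name taken from the question): all docker run options must come before the image name; anything after the image name is passed to the container's entrypoint or replaces its command.

```
# Options go BEFORE the image name; anything after "worker-app"
# would be handed to the entrypoint instead of to docker run.
sudo docker run -d --rm --name worker-app \
  --privileged \
  -p 192.168.51.3:5555:5555 \
  --net=bridge \
  worker-app
```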

Related

Docker container exited instantly with code (127)

In the log file I have this error:
./worker: error while loading shared libraries: libcares.so.2: cannot open shared object file: No such file or directory
I tried everything with the library: it exists and it is on the linker path.
My Dockerfile:
FROM ubuntu:20.04
RUN apt update -y && apt install libssl-dev -y
WORKDIR /worker
COPY build/worker ./
COPY build/lib /usr/lib
EXPOSE 50051
CMD ./worker
My makefile:
all: clean build

build:
	mkdir -p build/lib && \
	cd build && cmake .. && make

clean:
	rm -rf build

clean-containers:
	docker container stop `docker container ls -aq`
	docker container rm `docker container ls -aq`

create-workers:
	docker run --name worker1 -p 2001:50051 -d workerimage
	docker run --name worker2 -p 2002:50051 -d workerimage
	docker run --name worker3 -p 2003:50051 -d workerimage
	docker run --name worker4 -p 2004:50051 -d workerimage
	docker run --name worker5 -p 2005:50051 -d workerimage
	docker run --name worker6 -p 2006:50051 -d workerimage
	docker run --name worker7 -p 2007:50051 -d workerimage
	docker run --name worker8 -p 2008:50051 -d workerimage
	docker run --name worker9 -p 2009:50051 -d workerimage
	docker run --name worker10 -p 2010:50051 -d workerimage
Make sure libcares.so.2 and the other shared libraries are actually present inside /usr/lib of the container, and that the dynamic linker can resolve them there.
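A minimal way to diagnose this kind of error (assuming a Linux container with standard tooling) is to list the binary's shared-library dependencies with ldd; any line showing "not found" names a library the dynamic linker cannot resolve:

```shell
# List which shared libraries a binary needs and whether the dynamic
# linker can resolve them; "not found" marks the missing ones.
# Shown here on the ls binary; for the question's image you would run
# it against the worker binary, e.g.:
#   docker run --rm workerimage ldd ./worker
ldd /bin/ls

# If libraries were copied in after the image's linker cache was
# built, refreshing the cache inside the image can help:
#   ldconfig
```

If `ldd ./worker` shows libcares.so.2 as "not found", the file either is not in the container or sits outside the linker's search path.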

SonarQube using Docker and how to run it?

I need to install SonarQube using Docker.
I tried the Dockerfile below:
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get -y install unzip curl openjdk-7-jre-headless
RUN cd /tmp && curl -L -O https://sonarsource.bintray.com/Distribution/sonarqube/sonarqube-7.0.zip
RUN unzip /tmp/sonarqube-7.0.zip
EXPOSE 9000
CMD ["chmod +x","/tmp/sonarqube-7.0/bin/linux-x86-64/sonar.sh"]
CMD ["/sonarqube-7.0/bin/linux-x86-64/sonar.sh","start"]
Its build is successful.
My question is:
1. How can I run it on a server?
I tried docker run -d --name image -p 9000:9000 -p 9092:9092 sonarqube
but it's not connecting. Can anyone help me from here, or do I need to change the script?
Try the steps below.
Modify the Dockerfile last line to:
RUN echo "/sonarqube-7.0/bin/linux-x86-64/sonar.sh start" >> .bashrc
Rebuild the image
Start a container:
docker run -d --name image -p 9000:9000 -p 9092:9092 sonarqube /bin/bash
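As an aside, two CMD instructions cannot both run: only the last CMD in a Dockerfile takes effect, and the chmod is a build-time action that belongs in a RUN step. A sketch of the question's Dockerfile with those two fixes (version and download URL as in the question, untested; this assumes sonar.sh supports the console mode, which keeps it in the foreground so the container does not exit immediately):

```
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y unzip curl openjdk-7-jre-headless
RUN cd /tmp && curl -L -O https://sonarsource.bintray.com/Distribution/sonarqube/sonarqube-7.0.zip
# Extract to a fixed location so the CMD path below is unambiguous
RUN unzip /tmp/sonarqube-7.0.zip -d /opt
# chmod happens at build time, so it is a RUN step, not a CMD
RUN chmod +x /opt/sonarqube-7.0/bin/linux-x86-64/sonar.sh
EXPOSE 9000
# Only the last CMD takes effect; run in the foreground
CMD ["/opt/sonarqube-7.0/bin/linux-x86-64/sonar.sh", "console"]
```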

Docker container share screen over VNC

I am trying to follow a Dockerfile to create a container with Qt Creator, and I run the container with the following command:
docker run -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix qtcreator
However, this throws the error below:
PS D:\Docker\qtcreator> docker run -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix qtcreator
QXcbConnection: Could not connect to display
Aborted
Based on some research I did, I changed the file a bit to install VNC in the container; it now looks like this:
FROM ubuntu:14.04
# Install vnc, xvfb in order to create a 'fake' display and qtcreator
RUN apt-get update && apt-get install -y qtcreator x11vnc xvfb
RUN mkdir ~/.vnc
# Setup a password
RUN x11vnc -storepasswd 1234 ~/.vnc/passwd
RUN export uid=1000 gid=1000 && \
mkdir -p /home/developer && \
echo "developer:x:${uid}:${gid}:Developer,,,:/home/developer:/bin/bash" >> /etc/passwd && \
echo "developer:x:${uid}:" >> /etc/group && \
echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/developer && \
chmod 0440 /etc/sudoers.d/developer && \
chown ${uid}:${gid} -R /home/developer
USER developer
ENV HOME /home/developer
CMD /usr/bin/qtcreator
After this, I tried to run the container using the following command:
docker run -p 5900 -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix qtcreator x11vnc -forever -usepw -create
This command seems to run and then wait for termination, I think, as the PowerShell prompt does not return.
As I am new to Docker, can someone please let me know how to connect a VNC client on my Windows 10 host machine, or from a remote machine, to the VNC server running in this container? That is, how do I find the IP address and port number to connect to?
UPDATE 1
When I run the command docker ps --filter "status=running" I see the following output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a2ab1397bb2b qtcreator "x11vnc -forever -..." About an hour ago Up About an hour 0.0.0.0:32770->5900/tcp clever_haibt
69754e382042 qtcreator "x11vnc -forever -..." 2 hours ago Up 2 hours priceless_hugle
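Reading the docker ps output above: because no fixed host port was requested, Docker published the container's port 5900 on a random host port (here 32770). A sketch of how to find and control that mapping (container name taken from the docker ps output above; whether the VNC client connects to localhost or the Docker machine's IP depends on your Docker-for-Windows setup):

```
# Ask Docker which host port maps to the container's VNC port 5900
docker port clever_haibt 5900
# A VNC client on the host then connects to that address,
# e.g. localhost:32770 for a mapping of 0.0.0.0:32770->5900/tcp.

# To choose the host port yourself, publish it explicitly:
#   docker run -p 5900:5900 ... qtcreator x11vnc -forever -usepw -create
# and connect the VNC client to localhost:5900.
```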

docker image - centos 7 > ssh service not found

I installed the centos 7 Docker image on my Ubuntu machine, but the ssh service is not found, so I can't run this service.
[root@990e92224a82 /]# yum install openssh-server openssh-clients
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
* base: mirror.dhakacom.com
* extras: mirror.dhakacom.com
* updates: mirror.dhakacom.com
Package openssh-server-6.6.1p1-31.el7.x86_64 already installed and latest version
Package openssh-clients-6.6.1p1-31.el7.x86_64 already installed and latest version
Nothing to do
[root@990e92224a82 /]# ss
ssh ssh-agent ssh-keygen sshd ssltap
ssh-add ssh-copy-id ssh-keyscan sshd-keygen
How can I log in to the Docker container remotely?
You have to add the following instructions to your Dockerfile.
RUN yum install -y sudo wget telnet openssh-server vim git ncurses-term
RUN useradd your_account
RUN mkdir -p /home/your_account/.ssh && chown -R your_account /home/your_account/.ssh/
# Create known_hosts
RUN touch /home/your_account/.ssh/known_hosts
COPY files/authorized_keys /home/your_account/.ssh/
COPY files/config /home/your_account/.ssh/
COPY files/pam.d/sshd /etc/pam.d/sshd
RUN touch /home/your_account/.ssh/environment
RUN chown -R your_account /home/your_account/.ssh
RUN chmod 400 -R /home/your_account/.ssh/*
RUN chmod 700 -R /home/your_account/.ssh/known_hosts
RUN chmod 700 /home/your_account/.ssh/environment
# Enable sshd
COPY files/sshd_config /etc/ssh/
RUN ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
# Add an account to sudoers; this account doesn't need to type a password
COPY files/sudoers /etc/
COPY files/start.sh /root/
I had to remove "pam_nologin.so" from the file /etc/pam.d/sshd, because after upgrading openssh-server to openssh-server-6.6.1p1-31.el7, pam_nologin.so disallows remote login for all users, even when the file /etc/nologin does not exist.
start.sh
#!/bin/bash
/usr/sbin/sshd -E /tmp/sshd.log
Start the centos container:
docker run -d -t -p $(sshPort):22 --name $(containerName) $(imageName) /bin/bash
docker exec -d $(containerName) bash -c "sh /root/start.sh"
Log in to the container:
ssh -p $(sshPort) your_account@$(dockerIp)
To extend @puritys' answer:
You could do this in the Dockerfile instead.
Last in the file:
ENTRYPOINT /usr/sbin/sshd -E /tmp/sshd.log && /bin/bash
Then you will only need to run:
docker run -d -t -p $(sshPort):22 --name $(containerName) $(imageName) /bin/bash

Issue getting memcache container to automatically start in Docker

I have a container being built that really only contains memcached, and I want memcached to start when the container is run.
This is my current Dockerfile:
FROM centos:7
MAINTAINER Some guy <someguy@guysome.org>
RUN yum update -y
RUN yum install -y git https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y ansible && yum clean all -y
RUN yum install -y memcached
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 11211/tcp 11211/udp
CMD ["/usr/bin/memcached"]
#CMD ["/usr/bin/memcached -u root"]
#CMD ["/usr/bin/memcached", "-D", "FOREGROUND"]
The container builds successfully, but when I run it using the command
docker run -d -i -t -P <image id>, I cannot see the container in the list returned by docker ps.
I attempted to have my memcached service run the same way as my httpd container, but I cannot pass in the -D flag (since it's already a daemon, I'm guessing). This is how my httpd CMD was set up:
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
Locally, if I run the command /usr/bin/memcached -u root it runs as a process, but when I try it in the container's CMD it tells me that it cannot find the specified file (having to do with the -u root section, I am guessing).
Setting the CMD to /bin/bash did not allow the service to start either.
How can I have my memcached service run so that the container shows up in docker ps, allowing me to open a bash session inside it?
Thanks.
memcached will run in the foreground by default, which is what you want. The -d option would run memcached as a daemon which would cause the container to exit immediately.
The Dockerfile looks overly complex, try this
FROM centos:7
RUN yum update -y && yum install -y epel-release && yum install -y memcached && yum clean all
EXPOSE 11211
CMD ["/usr/bin/memcached","-p","11211","-u","memcached","-m","64"]
Then you can do what you need
$ docker build -t me/memcached .
<snipped build>
$ CID=$(docker create me/memcached)
$ docker start $CID
4ac5afed0641f07f4694c30476cef41104f6fd864c174958b971822005fd292a
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4ac5afed0641 me/memcached "/usr/bin/memcached -" About a minute ago Up 4 seconds 11211/tcp jovial_bardeen
$ docker exec $CID ps -ef
UID PID PPID C STIME TTY TIME CMD
memcach+ 1 0 0 01:03 ? 00:00:00 /usr/bin/memcached -p 11211 -u memcached -m 64
root 10 0 2 01:04 ? 00:00:00 ps -ef
$ docker exec -ti $CID bash
[root#4ac5afed0641 /]#
Or skip your Dockerfile if it actually only runs memcached and use:
docker run --name my-memcache -d memcached
At least to get your basic set-up going, and then you can update that official image as needed.
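As for the original "cannot find the specified file" error: in exec form, CMD treats each JSON array element as one literal argument, so CMD ["/usr/bin/memcached -u root"] looks for a binary whose entire file name is the string "/usr/bin/memcached -u root". A sketch of the two valid spellings:

```
# Exec form: one array element per argument, no shell involved
CMD ["/usr/bin/memcached", "-u", "root"]

# Shell form: the string is run via /bin/sh -c
CMD /usr/bin/memcached -u root
```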