Running 2 services - neo4j

I'm building an image for GitHub's Linkurious project, based on an image already on the Hub for the neo4j database. The neo4j image automatically runs the server on port 7474, and my image runs on port 8000.
When I run my image I publish both ports (could I do this with EXPOSE?):
docker run -d --publish=7474:7474 --publish=8000:8000 linkurious
But only my server seems to run: if I hit http://[ip]:7474/ I get nothing. Is there something special I have to do to make sure they both run?
* Edit I *
Here's my Dockerfile:
FROM neo4j/neo4j:latest
RUN apt-get -y update
RUN apt-get install -y git
RUN apt-get install -y npm
RUN apt-get install -y nodejs-legacy
RUN git clone git://github.com/Linkurious/linkurious.js.git
RUN cd linkurious.js && npm install && npm run build
CMD cd linkurious.js && npm start
* Edit II *
To perhaps help explain my quandary, I've asked a different question.

EXPOSE is there to allow inter-container communication (within the same Docker daemon), with the docker run --link option.
Port mapping is there to map EXPOSEd ports to the host, to allow client-to-container communication. So you need --publish.
See also "Difference between “expose” and “publish” in docker".
See also an example with "Advanced Usecase with Docker: Connecting Containers".
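To make the distinction concrete, here is a minimal sketch (the image name webapp and its port 8000 are hypothetical):

# In the Dockerfile: EXPOSE only declares the port, for --link and -P
EXPOSE 8000

# At run time:
docker run -d webapp                # reachable only from linked containers
docker run -d -p 8000:8000 webapp   # published: reachable from the host on port 8000
docker run -d -P webapp             # all EXPOSEd ports mapped to random host ports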
Make sure, though, that the IP is the right one ($(docker-machine ip default)).
If you are using a VM (meaning, you are not using docker directly on a Linux host, but on a Linux VM with VirtualBox), make sure the mapped ports 7474 and 8000 are port forwarded from the host to the VM.
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,7474,,7474"
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,8000,,8000"
In the OP's case, this is using neo4j: see "Neo4j with Docker", based on the neo4j/neo4j image and its Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["neo4j"]
It is not meant to be used for installing another service (like nodejs), where the CMD cd linkurious.js && npm start would completely override the neo4j base image CMD (meaning neo4j would never start).
It is meant to be run on its own:
# interactive with terminal
docker run -i -t --rm --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
# as daemon running in the background
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
And then used by another container, with a --link neo4j:neo4j directive.
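For illustration, a sketch of that two-container setup (the Linkurious Dockerfile below is hypothetical, and assumes the app can reach the database via the neo4j hostname that --link injects):

# Dockerfile for the Linkurious image: node only, no neo4j inside
FROM node:latest
RUN git clone git://github.com/Linkurious/linkurious.js.git
WORKDIR linkurious.js
RUN npm install && npm run build
CMD ["npm", "start"]

# neo4j runs in its own container; the app container links to it
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 7474:7474 neo4j/neo4j
docker build -t linkurious .
docker run -d --name linkurious --link neo4j:neo4j -p 8000:8000 linkurious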

Related

Docker Toolbox refused to connect on the browser - Tried different solutions - Windows 7

I have installed Docker Toolbox on Windows 7.
Everything has been installed correctly.
Now I am trying to build and run a Dockerfile.
Dockerfile
FROM debian:9
RUN apt-get update -yq \
&& apt-get install curl gnupg -yq \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash \
&& apt-get install nodejs -yq \
&& apt-get clean -y
ADD . /app/
WORKDIR /app
RUN npm install
VOLUME /app/logs
CMD npm run start
After successfully running docker build -t test . and docker run -it -d -p 3306:3306 test, I try to access it via my browser at:
http://192.168.99.100:3306
which corresponds to http://[docker-machine-ip]:port
But it refuses to connect.
After searching on the internet, I tried several solutions:
1. Use the container IP
docker inspect --format '{{ .NetworkSettings.IPAddress }}' [id]
http://[containerIP]:port
2. Add port forwarding on the Oracle VM default machine
VirtualBox -> Machine settings -> Network -> Adapter 1 (NAT) -> Advanced, Port Forwarding
name : test
Host ip : 127.0.0.1
Host port : 3306
Guest port : 3306
I even tried setting the Guest IP to 192.168.99.100 and leaving the Host IP empty.
3. Try different ports
I tried different ports to see whether the problem was caused by a port that was already open.
I even tried the option --publish-all (-P), but as a result I don't have any ports showing in docker ps -a (see the note below):
docker run -it -d -P test
4. Deactivate the Windows firewall
Both public and private.
None of those solutions worked for me and I don't know what to do next.
Any help would be appreciated. Thank you.
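One detail stands out in attempt 3: docker run -P only publishes ports that are declared with EXPOSE in the Dockerfile, and the Dockerfile above declares none, which would explain the empty PORTS column. Likewise, -p 3306:3306 only helps if the Node app actually listens on 3306 inside the container. A minimal sketch of both points, assuming the app really listens on port 3306:

# In the Dockerfile, declare the port the app listens on
EXPOSE 3306

# Then publish it explicitly...
docker run -d -p 3306:3306 test
# ...or let -P pick a random host port, now visible in docker ps
docker run -d -P test
docker ps --format '{{.Ports}}'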

What should I do if exposing ports in a Dockerfile does not take effect?

I have the following Dockerfile to run an Nginx server, but I can't seem to get Docker to expose port 80 through my host machine so I can access it externally:
FROM ubuntu:latest
EXPOSE 80
RUN apt-get update
RUN apt-get -y install apt-utils
RUN apt-get -y dist-upgrade
RUN apt-get -y install nginx
CMD service nginx start
If I run docker run -p 80:80 -d nginxserver after building the image, the correct global settings take effect; however, my newly created Docker container does not run persistently and exits after a brief second.
If I try docker run -it /bin/bash -d nginxserver, my Docker container keeps running; however, I won't be able to connect to the Nginx server from outside the host machine.
If I try docker run -p 80:80 -it /bin/bash -d nginxserver, it fails with the following error message:
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:348: starting container process caused "exec:
\"-it\": executable file not found in $PATH": unknown.
What would be the correct solution here?
The best solution is just to use the standard nginx image, if you're not really going to customize the image at all.
If you're writing a custom image, you should broadly assume commands like service just don't work. The CMD of the image you show (assuming it's successful) attempts to launch nginx as a background service; once it's started in the background, the container's main process has finished and the container exits. The CMD generally needs to launch the single process that the container runs in the foreground.
In terms of your various docker run gyrations, the options always come in the same order:
docker run \
-d -p 80:80 \ # docker-specific options
nginxserver \ # the image name
nginx -g 'daemon off;' # the command to run and its options
If you specify an alternate command (like /bin/bash) that runs instead of the main container process, and if the container normally would have run a network server, you get the shell instead. /bin/bash is a command and not an argument to -it; the same breakdown would be
docker run \
--rm -i -t \ # docker-specific options
nginxserver \ # the image name
/bin/bash # the command to run and its options
Note that only a process running as a non-root user inside the container needs extra privileges to listen on ports under 1024; as root (the container default) this is not an issue.
Also, service nginx start exits immediately (this is covered in David Maze's answer).
You should instead use CMD ["nginx", "-g", "daemon off;"] so nginx runs in the foreground.
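Putting the two answers together, a minimal sketch of a Dockerfile that keeps nginx in the foreground:

FROM ubuntu:latest
RUN apt-get update && apt-get -y install nginx
EXPOSE 80
# nginx must stay in the foreground as the container's main process
CMD ["nginx", "-g", "daemon off;"]

# build and run, publishing port 80
docker build -t nginxserver .
docker run -d -p 80:80 nginxserver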

Docker Toolbox volumes on Windows don't refresh changes in the container

I am getting started with Docker on Windows and I am trying to use volumes to manage data in containers.
My host environment is:
Windows 8.1
Docker Toolbox 1.8
VirtualBox 5.0.6
I've created an nginx image using the following Dockerfile.
Dockerfile
FROM centos:6.6
MAINTAINER afym
ENV WEBPORT 80
RUN yum -y update; yum clean all
RUN yum -y install epel-release; yum clean all
RUN yum -y install nginx; yum clean all
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
VOLUME /usr/share/nginx/html
EXPOSE $WEBPORT
CMD [ "/usr/sbin/nginx" ]
I've created an nginx container using the following command:
docker run -d --name nge -v //c/Users/src:/usr/share/nginx/html -p 8082:80 ng1
b738fef9cc4d135416a8cca4caf869acf944319b7c3c61129b11f37f9d891598
Then I go to my browser and I can see the web page.
However, when I make a change to my index.html file, it doesn't refresh in the browser, even after a hard refresh (Ctrl+F5).
I went to the VirtualBox machine to check whether my shared folders configuration is OK.
Then I inspect my nge container with the following command:
docker inspect nge
What is happening with the volume? Why can't I see my changes?
After a couple of days I found the solution.
First of all, Docker on Windows (and even on Mac) uses a boot2docker instance on VirtualBox.
(Diagrams: the boot2docker architecture on Mac and on Windows.)
Next, the official Docker documentation on docker volume says:
Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory
However, after finding a solution, I decided to change the default C:\Users to another path, just to keep things organized. With this in mind, I did the following steps:
Define your own workspace directory. In my case it is /e/arquitectura (optional; if you want, you can use the default path, which is /c/Users).
Verify the configuration on the virtual machine (in the default machine go to Settings > Shared Folders).
SSH into the default machine and mount the directory using its alias name:
sudo mount -t vboxsf alias-name-virtualbox some-path-in-boot2docker
# In my case (boot2docker instance)
$ cd
$ mkdir arquitectura
$ sudo mount -t vboxsf arquitectura /arquitectura
Finally, create a new container, or restart an existing one if you haven't changed the /c/Users path:
# In my case (docker client)
$ docker run -d --name nge -v //arquitectura/src:/usr/share/nginx/html -p 8081:80 ng1
Now it works.
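For reference, the shared folder from step 2 can also be created from the host with VBoxManage instead of through the GUI (a sketch; the name arquitectura and the host path match the example above but are otherwise arbitrary):

# On the Windows host, with the default machine stopped
VBoxManage sharedfolder add default --name arquitectura --hostpath E:\arquitectura --automount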

Is it possible to access HBase installed inside a Docker container using a Java client on Mac OS X?

I created a Docker container that has HBase installed in standalone mode. I used --net=host mode to run the container. I can see the UI for the master and the regionserver, but when I try to connect to HBase from my Java program, after the connection to ZooKeeper is established, it says This server is in the failed servers list: boot2docker:60020. I am using Mac OS X and boot2docker. Please give suggestions. Here is my Dockerfile:
FROM centos:6
# Install required libraries.
RUN yum install -y tar
# Install java.
RUN curl -LO \
'http://download.oracle.com/otn-pub/java/jdk/7u71-b14/jdk-7u71-linux-x64.rpm' \
-H 'Cookie: oraclelicense=accept-securebackup-cookie'
RUN rpm -i jdk-7u71-linux-x64.rpm
RUN rm -f jdk-7u71-linux-x64.rpm
# Export JAVA_HOME.
ENV JAVA_HOME /usr
# Copy hbase code to docker container.
COPY hbase-*.tar.gz /
RUN tar -xzvf hbase-*.tar.gz
RUN rm hbase-*.tar.gz
RUN mv hbase-* hbase
# Copy hbase-site.xml.
ADD hbase-config-files/hbase-site.xml /hbase/conf/hbase-site.xml
# Start Hbase.
CMD ["./hbase/bin/hbase", "master", "start"]`
To run this container I used docker run --net=host -t docker_image

Docker client execution

I have a very basic question regarding Docker.
I have a Docker host installed on ubuntuA.
So, to test this from a client (ubuntuB), does Docker also need to be installed on the ubuntuB machine?
The more correct answer is that only the Docker client needs to be installed on ubuntuB.
On ubuntuB, install the Docker client only; it is around 17 MB:
# apt-get update && apt-get install -y curl
# curl https://get.docker.io/builds/Linux/x86_64/docker-latest -o /usr/local/bin/docker
# chmod +x /usr/local/bin/docker
In order to run docker commands, you need to talk to the daemon on ubuntuA (port 2375 is used since Docker 1.0):
$ docker -H tcp://ubuntuA:2375 images
or
$ export DOCKER_HOST=tcp://ubuntuA:2375
$ docker images
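Note that this only works if the daemon on ubuntuA actually listens on TCP; by default it only listens on the local Unix socket. A sketch of the daemon side (docker -d was the flag in the Docker 1.x era of this answer; newer versions use dockerd instead):

# on ubuntuA: make the daemon listen on TCP in addition to the local socket
docker -d -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375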
See more detail at http://docs.docker.com/articles/basics/
Yes, you have to install Docker on both the client and the server.
