Docker - Connecting to localhost - connection refused

This is my Dockerfile:
FROM sonatype/nexus3:latest
COPY ./scripts/ /bin/scripts/
RUN curl -u admin:admin123 -X GET 'http://localhost:8081/service/rest/v1/repositories'
After running build:
docker build -t test .
The output is:
(7) Failed connect to localhost:8081; Connection refused
Why? Is sending requests to localhost (the container being built) possible only after running it? Or maybe I should add something to the Dockerfile?
Thanks for help :)

Why do you want to connect to the service while the image is being built?
The service is not running at build time; you'll need to start the container first.
Remove the curl from the Dockerfile and start the container:
docker run test

A Dockerfile is a way to create images, and you then create containers from images. Ports only serve traffic once a container is up and running. Hence, you can't do a curl while building an image.
Change Dockerfile to -
FROM sonatype/nexus3:latest
COPY ./scripts/ /bin/scripts/
Build image -
docker build -t test .
Create container from the image -
docker run -d --name nexus -p 8081:8081 test
Now see if your container is running & do a curl -
docker ps
curl -u admin:admin123 -X GET 'http://localhost:8081/service/rest/v1/repositories'
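If the eventual goal of that curl was to provision Nexus from the scripts you copied in, one option is to run it after the container is up, waiting for the service to come online first. A minimal sketch, assuming the image's default admin credentials and the standard Nexus status endpoint:
docker run -d --name nexus -p 8081:8081 test
# Poll until Nexus answers; it can take a minute or two to start.
until curl -sf -o /dev/null 'http://localhost:8081/service/rest/v1/status'; do
  echo 'waiting for Nexus...'
  sleep 5
done
curl -u admin:admin123 -X GET 'http://localhost:8081/service/rest/v1/repositories'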

Related

Using Weave Net Docker Swarm Adapter to allow services to communicate

I have a Docker Swarm cluster setup as follows:
Setup on node 1
docker swarm init --advertise-addr ${NODE_1_IP} --data-path-port=7789
Setup on node 2
docker swarm join --advertise-addr ${NODE_2_IP} --token XXX ${NODE_1_IP}:2377
I have then installed weave on both nodes as follows.
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
docker plugin install weaveworks/net-plugin:latest_release
docker plugin disable weaveworks/net-plugin:latest_release
docker plugin set weaveworks/net-plugin:latest_release WEAVE_PASSWORD=XXX
docker plugin enable weaveworks/net-plugin:latest_release
I wanted to set a password because I need the network to be encrypted.
I then set up a network and a service. The constraint makes the service consist of one container running on node 2.
docker network create --driver=weaveworks/net-plugin:latest_release --attachable testnet_weave_encrypted
docker service create --network=testnet_weave_encrypted --name web_encrypted --publish 80 --replicas=1 --constraint 'node.labels.datastore001 == true' nginx:latest
Finally I test it inside another container running on node 1:
docker run --rm --name alpine --net=testnet_weave_encrypted -ti alpine:latest sh
apk add --no-cache curl
curl web_encrypted
This fails with the message:
curl: (7) Failed to connect to web_encrypted port 80: Host is unreachable
I know that web_encrypted is not wrong because when I try a different value I get a different error.
After bashing my head against this wall for hours I have discovered that I can do the following on node 1:
curl web_encrypted.1.lsrdyz8n66jdotaqgdzk9u1uo
And it works!
But of course this is useless to me because the exact container name will change every time the service recreates it.
Is this a bug in the weave plugin or have I missed a step in setting this up?
If you specify endpoint_mode as DNSRR, you should be able to use the service name as well as any aliases you specify. The weavenet network in the snippet below was created using the Weave network plugin.
host# docker service create --network name=weavenet,alias=ngx --endpoint-mode=dnsrr --name nginx nginx
host# docker run -it -d --name alpine --network=weavenet alpine:curl
host# docker exec -it alpine sh
/ # curl nginx
<!DOCTYPE html>
<html>
....
/ # curl ngx
<!DOCTYPE html>
<html>
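Applied to the setup in the question, the service creation might look like the sketch below. Note that DNS round-robin mode is incompatible with the routing mesh, so the ingress-style --publish from the question is omitted here; the alias web is just an illustration:
docker service create --network name=testnet_weave_encrypted,alias=web --endpoint-mode=dnsrr --name web_encrypted --replicas=1 --constraint 'node.labels.datastore001 == true' nginx:latest
# From another container attached to the same network, both names should then resolve:
curl web_encrypted
curl web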

Integrate two docker apps - Orthanc and OVIYAM

I am trying to start two docker services. One is Orthanc and the other is the OVIYAM image viewer. My objective is to be able to view the images that I uploaded in Orthanc in Oviyam.
Step 1 - Upload images in Orthanc
Step 2 - View them in Oviyam
Though I am currently able to start these two services, I am not able to integrate the two. I did provide the listening port for OVIYAM, which is 1025, in orthanc.json.
To start Orthanc, I execute the below command
docker run -p 4242:4242 -p 8042:8042 --rm --name orthanc -v /home/test/abcd/abc/new_orthanc/orthanc.json:/etc/orthanc/orthanc.json -v /home/test/abcd/abc/new_orthanc/orthanc-db:/var/lib/orthanc/db jodogne/orthanc-plugins /etc/orthanc --verbose
To start Oviyam, I execute the below command
docker run -it --rm --name oviyam -p 8081:8080 -p 1025:1025 -v /home/test/abcd/abc/oviyam/data/:/usr/local/tomcat/work oviyam:2.7.1
I got the Docker files for OVIYAM from this link (https://github.com/mocsharp/oviyam-docker), in case that helps.
Though I am able to launch these services successfully, I am not sure how to integrate the two.
How do I set up this connection between the two apps? Can you please help?
It depends on how those applications communicate. If they talk to each other through network requests, you could use something like Docker Compose to start and link them together (https://docs.docker.com/compose/ , https://dev.to/mozartted/docker-networking--how-to-connect-multiple-containers-7fl).
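For illustration, a docker-compose.yml along these lines would put both containers on one shared network, where each can reach the other by service name (images, ports, and volume paths are taken from the commands above; the service names orthanc and oviyam are just labels chosen here):
version: "3"
services:
  orthanc:
    image: jodogne/orthanc-plugins
    command: /etc/orthanc --verbose
    ports:
      - "4242:4242"
      - "8042:8042"
    volumes:
      - /home/test/abcd/abc/new_orthanc/orthanc.json:/etc/orthanc/orthanc.json
      - /home/test/abcd/abc/new_orthanc/orthanc-db:/var/lib/orthanc/db
  oviyam:
    image: oviyam:2.7.1
    ports:
      - "8081:8080"
      - "1025:1025"
    volumes:
      - /home/test/abcd/abc/oviyam/data/:/usr/local/tomcat/work
With this in place, Orthanc's configuration would point at host oviyam, port 1025, rather than localhost, since each container has its own network namespace.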

Can't access webserver of airflow after run the container

I pulled the latest version of the airflow image from Docker Hub:
apache/airflow.
And I tried to run a container based on this image:
docker run -d -p 127.0.0.1:5000:5000 apache/airflow webserver
The container is running and the port status is fine, but I still can't access the airflow webserver from my browser.
This site can’t be reached.
127.0.0.1 refused to connect.
After a few minutes, the container stops automatically.
Could anyone advise?
I don't have experience with airflow, but this is how to get this image running:
First of all, you have to override the entrypoint, because the existing one doesn't help much. From what I understand, this image needs 2 steps in order to run: initializing the database and starting the webserver. For this reason the existing entrypoint is not useful.
Run:
docker run -p 5000:8080 --entrypoint /bin/bash -ti apache/airflow
This will open a shell inside a running container. Also note that I mapped host port 5000 to port 8080 inside the container, which is the port the webserver will listen on.
Then inside the container run:
airflow db init
airflow webserver -p 8080
Note that in older versions of airflow, the command to initialize the database is airflow initdb, instead of airflow db init.
Open a browser and navigate to http://localhost:5000
When you close the container your work is gone, though ;)
Another thing you can do is put the 2 airflow commands in a bash script, map that script into the container, and use it as the entrypoint. Something like this:
docker run -p 5000:8080 -v $(pwd)/startup.sh:/opt/airflow/startup.sh --entrypoint /opt/airflow/startup.sh -d --name airflow apache/airflow
You should make startup.sh executable before running this.
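For reference, startup.sh could be as simple as the sketch below (assuming a recent Airflow where the init command is airflow db init, as noted above):
#!/bin/bash
# Initialize the metadata database, then start the webserver in the foreground.
airflow db init
exec airflow webserver -p 8080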
Let me know if you run into issues.

docker command not running

I am new to docker and am trying to run the following command, but I get the error below.
Nihits-MacBook-Pro:~ nihit$ docker container run --publish 80:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
bc95e04b23c0: Pull complete
110767c6efff: Pull complete
f081e0c4df75: Pull complete
Digest: sha256:004ac1d5e791e705f12a17c80d7bb1e8f7f01aa7dca7deee6e65a03465392072
Status: Downloaded newer image for nginx:latest
docker: Error response from daemon: driver failed programming external connectivity on endpoint gracious_pare (0a28a065694108085e2b7533870d9d84889899baf5d4130c58c49c4736bb6b12): Error starting userland proxy: Bind for 0.0.0.0:80: unexpected error (Failure EADDRINUSE).
ERRO[0016] error waiting for container: context canceled
I tried different commands with other ports, but all of them get stuck and don't do anything.
Nihits-MacBook-Pro:~ nihit$ docker container run --publish 3000:80 nginx
Nihits-MacBook-Pro:~ nihit$ docker container run --publish 8080:80 nginx
None of them work; they just hang in the terminal.
This should work
docker container run --publish 3000:80 nginx:latest
From the conversation above, it looks like you received a long string of numbers, meaning that the container is running; just hit localhost:3000 in your browser and you will see nginx running.
Normally port 80 is already taken by Apache/PHP if you have them installed on your computer.
If the command gets stuck, it also means that the container is running, just not in the background.
The --detach or -d flag tells Docker to run the app in the background; Docker then prints the long string of numbers (the container ID) and returns to the prompt, so you won't see anything happening in the terminal.
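To confirm what is already listening on port 80 before retrying, something like this works (sketch, macOS/Linux):
sudo lsof -i :80          # list processes bound to port 80
# or, on Linux:
sudo ss -ltnp | grep ':80 '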
Would you mind trying the command below to start nginx again?
$ docker run -d -p 80:80 nginx:latest
BTW, all commands that start with "docker container" seem to be the newer management-style commands from Docker.
But according to https://docs.docker.com/edge/engine/reference/commandline/docker/,
"docker container run" behaves the same as "docker run", so there is no functional difference between the two.
In my case, I seldom use the "docker container" form to run my containers.
If the container successfully started, the shell will return the message such as follows:
sh-3.2# docker run -d -p 8080:80 nginx:latest
b0a5aa7965119c5b2705392b5b9e9640a4ab8edefda6722ee86da507229cdf05
sh-3.2#
sh-3.2# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED
b0a5aa796511 nginx:latest "nginx -g 'dae... About a minute ago...

Docker container with Blazegraph Triple Store not working possibly due to networking

I'm preparing a Docker image to teach my students the basics of Linked Data. I want them to actually prepare proper RDF and simulate the process of publishing it on the web as Linked Data, so I have prepared a Docker image comprising:
Triple Store: Blazegraph, listening to port 9999.
GRefine. I have copied an instance of Open Refine, with the RDF extension included. Listening to port 3333.
Linked Data Server: I have copied an instance of Jetty, with Pubby inside it. Listening to port 8080.
I have tested the three on my localhost (running Ubuntu 14.04) and they work fine. This is the Dockerfile I'm using to build the image:
FROM ubuntu:14.04
MAINTAINER Mikel Egaña Aranguren <my.email@x.com>
RUN apt-get update && apt-get install -y openjdk-7-jre wget curl
RUN mkdir /LinkedDataServer
COPY google-refine-2.5 /LinkedDataServer/google-refine-2.5
COPY blazegraph /LinkedDataServer/blazegraph
COPY jetty /LinkedDataServer/jetty
EXPOSE 9999
EXPOSE 3333
EXPOSE 8080
WORKDIR /LinkedDataServer
CMD java -server -jar blazegraph/bigdata-bundled.jar
CMD google-refine-2.5/refine -i 0.0.0.0
WORKDIR /LinkedDataServer/jetty
CMD java -jar start.jar jetty.port=8080
I run the container and it does map the appropriate ports:
docker run -d -p 9999:9999 -p 3333:3333 -p 8080:8080 mikeleganaaranguren/linked-data-server:0.0.1
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a08709d23acb mikeleganaaranguren/linked-data-server:0.0.1 /bin/sh -c 'java -ja 5 seconds ago Up 4 seconds 0.0.0.0:3333->3333/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:9999->9999/tcp dreamy_engelbart
The triple store, for example, seems to be working: if I go to 127.0.0.1:9999, I can access the triple store's web interface.
However, if I try to do anything (queries, uploading data, ...), the triple store simply fails with an "ERROR: Could not contact server". Since the same setup works on the host, I assume I'm doing something wrong with Docker. I have tried with -P instead of mapping the ports, and with --net=host, but I get the same error.
PS: Jetty also fails in the same fashion, and GRefine is not even working.
You'll need to make sure to use the IP of the docker container to access the Blazegraph instance. Outside of the container, it will not be running on 127.0.0.1, but rather the IP assigned to the docker container.
You'll need to run something like
docker inspect --format '{{ .NetworkSettings.IPAddress }}' "CONTAINER ID"
where CONTAINER ID is the ID of your running container (from docker ps).
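One more thing worth checking in the Dockerfile from the question: Docker only honors the last CMD instruction, so as written only the Jetty command runs, and Blazegraph and GRefine are never started inside the container. A common workaround is a single start script that launches all three (a sketch using the paths from the Dockerfile above; the name start-all.sh is hypothetical):
#!/bin/bash
# start-all.sh - launch Blazegraph and GRefine in the background, keep Jetty in the foreground.
java -server -jar /LinkedDataServer/blazegraph/bigdata-bundled.jar &
/LinkedDataServer/google-refine-2.5/refine -i 0.0.0.0 &
cd /LinkedDataServer/jetty
exec java -jar start.jar jetty.port=8080
Then COPY the script into the image and set it as the single CMD.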
