Can't access Airflow webserver after running the container - Docker

I pulled the latest version of the airflow image from Docker Hub:
apache/airflow
And I tried to run a container based on this image:
docker run -d -p 127.0.0.1:5000:5000 apache/airflow webserver
The container is running and the port status is fine, but I still can't access the Airflow webserver from my browser:
This site can’t be reached.
127.0.0.1 refused to connect.
After a few minutes, the container stops automatically.
Could anyone advise?

I don't have experience with Airflow, but this is how to get this image running:
First of all, you have to override the entrypoint, because the existing one doesn't help much. From what I understand, this image needs two steps in order to run: initdb and webserver, so the existing entrypoint is not useful.
Run:
docker run -p 5000:8080 --entrypoint /bin/bash -ti apache/airflow
This will open a shell inside a running container. Also note that I mapped host port 5000 to port 8080 inside the container.
Then inside the container run:
airflow db init
airflow webserver -p 8080
Note that in older versions of airflow, the command to initialize the database is airflow initdb, instead of airflow db init.
Open a browser and navigate to http://localhost:5000
When you close the container your work is gone though ;)
Another thing you can do is put the two airflow commands in a bash script, map that script into the container, and use it as the entrypoint. Something like this:
docker run -p 5000:8080 -v $(pwd)/startup.sh:/opt/airflow/startup.sh --entrypoint /opt/airflow/startup.sh -d --name airflow apache/airflow
You should make startup.sh executable before running this.
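A minimal startup.sh along those lines might look like this (assuming the newer airflow db init command; older versions use airflow initdb, as noted above):
#!/bin/bash
# Initialize the Airflow metadata database, then start the webserver on port 8080
airflow db init
exec airflow webserver -p 8080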
Let me know if you run into issues.

Related

How to run sitespeed.io in an apache/nginx server?

I have recently heard about sitespeed.io and started using it to measure the performance of my site.
I am running it in a Docker container on my GCP cloud instance.
The problem is that every time I run the command it stores the result in a particular directory, sitespeed-result, and then I need to copy the whole thing to my local Windows machine to view the index.html file.
Is it possible to serve this from a server like Apache? I mean, for example, I can run an Apache container on my Docker host, but how do I map the sitespeed.io result so that it is available at http://my-gcp-instance:80, where my Apache container is running on port 80?
sudo docker run -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:13.3.0 https://mywebsite.com
Sorry for posting the question, but I got it working myself.
sudo docker run -dit --name my-apache -p 8080:80 -v "$(pwd)":/usr/local/apache2/htdocs/ httpd:2.4
$(pwd) is where I am storing the sitespeed results.
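Putting the two commands together, a minimal workflow might be (the exact report path under sitespeed-result varies per run, so the final URL is only indicative):
# Run sitespeed.io so the results land in the current directory
sudo docker run -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:13.3.0 https://mywebsite.com
# Serve that same directory with Apache on host port 8080
sudo docker run -dit --name my-apache -p 8080:80 -v "$(pwd)":/usr/local/apache2/htdocs/ httpd:2.4
# Then browse to http://my-gcp-instance:8080/sitespeed-result/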

How to start nginx in Docker on Windows

I am using Windows 10 and I have installed Docker and pulled nginx:
docker pull nginx
I started nginx with this command:
docker run -dit --rm --name nginx -p 9001:80 nginx
And a simple page is available on localhost:9001.
I would like to pass an nginx.conf file to nginx. I would also like to give it a root folder, so that on localhost:9001 I see the static page D:/nginx/website_files/index.html. In the website_files folder there are also other static pages.
How to pass nginx.conf and folder path to nginx in Docker on Windows?
I started using Kitematic and pulled hello-world-nginx. With it I was able to browse the files by clicking on Volumes -> /website_files. Other static files can be added on the path that opens. After that nginx can be restarted, and it increments the port by 1; the port number can be seen with docker ps.
To change the nginx config file, after starting nginx I run docker cp D:/nginx/multi.conf b3375f37a95c:/etc/nginx/nginx.conf, where b3375f37a95c is the container ID obtained from docker ps. After that nginx should be restarted from Kitematic.
If you only want to edit nginx.conf instead of completely replacing it, you can first get the current conf file with docker cp b3375f37a95c:/etc/nginx/nginx.conf D:/nginx/multi.conf, edit multi.conf, and then copy it back as before.
You can use host volume mapping
-v <host-directory>:<container-path>
for example:
docker run -dit --rm -v d:/data:/data -p 9001:80 nginx
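Applied to the paths from the question, the mapping might look like this (assuming the nginx image's default html root /usr/share/nginx/html, and replacing the main config with multi.conf):
docker run -dit --rm --name nginx -p 9001:80 -v D:/nginx/website_files:/usr/share/nginx/html:ro -v D:/nginx/multi.conf:/etc/nginx/nginx.conf:ro nginx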
Try this in PowerShell:
PS C:\> docker run --name myNGinX -p 80:80 -p 443:443 -v C:\Docker\Volumes\myNGinX\www\:/etc/nginx/www/:ro -v C:\Docker\Volumes\myNGinX\nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
Late to the answer party, and shameless self-promotion, but I created a repo using Docker Compose with an Nginx proxy server and two other websites, all in containers.
Check it out here

Link docker containers (Drupal and MariaDB)

To start, I built a Docker container from the MariaDB Docker image.
After that I loaded a database dump file into the running container.
[screenshot: MariaDB status]
Everything goes fine.
When I want to run/link the Drupal image:
docker run --name drupaldocker --link mariadbdocker:mariadb -p 8089:80 -d drupal
I can reach the Drupal installation page, but when I want to load the database I always get the same error:
host, pass or dbname is wrong.
But I'm pretty sure my credentials are right.
It seems that my Drupal container can't find the MariaDB container.
Docker links are a deprecated feature and should be avoided: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
I assume you have a container named mariadbdocker running.
If you gain bash access inside the drupaldocker container, you should be able to ping the mariadb alias. Start a container with a shell like this:
docker run --name drupaldocker --link mariadbdocker:mariadb -p 8089:80 -it drupal /bin/bash
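Inside that shell, a quick check could be (assuming ping or getent is available in the drupal image):
ping -c 3 mariadb      # should resolve to the mariadbdocker container's IP
getent hosts mariadb   # fallback if ping is not installed in the image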
If the ping succeeds, then you probably still have a credentials issue.
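Since links are deprecated, the usual replacement nowadays is a user-defined bridge network; this is not what the answer above uses, just a sketch (the root password value is a placeholder):
docker network create drupal-net
docker run -d --name mariadbdocker --network drupal-net -e MYSQL_ROOT_PASSWORD=secret mariadb
docker run -d --name drupaldocker --network drupal-net -p 8089:80 drupal
# In the Drupal installer, use "mariadbdocker" as the database host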

Add xserver into Docker container (the host is headless)

I'm building a Docker container which has Maven and some dependencies, and then it executes a script inside the container. It seems one of those dependencies needs an X server to work. Nothing is shown on screen, but it seems necessary and can't be avoided.
I got it working by putting ENV DISPLAY=x.x.x.x:0 in the Dockerfile; it connects to the external X server and works. But the point is to make the Docker container self-sufficient.
So I need to add an X server to my container by adding the necessary packages in the Dockerfile. And I want that X server to be accessible only by the Docker container itself and not externally.
The FROM of my Dockerfile is FROM ubuntu:15.04, and that is unchangeable because my Dockerfile has a lot of things depending on that specific version.
I've read some posts about how to connect from a Docker container to the X server of the Docker host machine, like this. But as the question's title says, the Docker host is headless and doesn't have an X server.
What would be the minimum apt-get packages to install into the container to have an X server?
I guess the display environment variable, like ENV DISPLAY=:0, will be needed in my Dockerfile. Is this correct?
Does anything else need to be added to the docker run command?
Thank you.
You can install and run an x11vnc server inside your Docker container. I'll show you how to run it on a headless host and connect to it remotely to run X applications (e.g. xterm).
Dockerfile:
FROM joprovost/docker-x11vnc
RUN mkdir ~/.vnc && touch ~/.vnc/passwd
RUN x11vnc -storepasswd "vncdocker" ~/.vnc/passwd
EXPOSE 5900
CMD ["/usr/bin/x11vnc", "-forever", "-usepw", "-create"]
And build a docker image named vnc:
docker build -t vnc .
Run a container, and remember to map port 5900 to the host for remote connections (I'm using --net=host here):
docker run -d --name=vnc --net=host vnc
Now you have a running container with x11vnc inside. Download a VNC client like RealVNC and try to connect to <server_ip>:5900 from your local machine; the password is vncdocker, which is set in the Dockerfile. You'll get the remote X screen with an xterm open. If you execute env, you will find the environment variable DISPLAY=:20.
Let's go to the docker container and try to open another xterm:
docker exec -it vnc bash
Then execute the following command inside container:
DISPLAY=:20 xterm
A new xterm window will pop up in your VNC client window. I guess that's the way you are going to run your application.
Note:
The base vnc image is based on Ubuntu 14, but I guess the packages are similar in Ubuntu 16.
Don't expose port 5900 if you don't want remote connections.
Hope this can help :-)
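As an aside, not part of the answer above: if nothing ever needs to be viewed, even remotely, a virtual framebuffer such as Xvfb is a lighter option. A rough sketch on the question's base image, with run-script.sh standing in for the actual script:
FROM ubuntu:15.04
RUN apt-get update && apt-get install -y xvfb
COPY run-script.sh /run-script.sh
# xvfb-run starts a temporary X server, sets DISPLAY, runs the command, then shuts the server down
CMD ["xvfb-run", "--auto-servernum", "/run-script.sh"]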

Docker container with Blazegraph Triple Store not working possibly due to networking

I'm preparing a Docker image to teach my students the basics of Linked Data. I want them to actually prepare proper RDF and simulate the process of publishing it on the web as Linked Data, so I have prepared a Docker image comprising:
Triple Store: Blazegraph, listening to port 9999.
GRefine. I have copied an instance of Open Refine, with the RDF extension included. Listening to port 3333.
Linked Data Server: I have copied an instance of Jetty, with Pubby inside it. Listening to port 8080.
I have tested the three on my localhost (running Ubuntu 14.04) and they work fine. This is the Dockerfile I'm using to build the image:
FROM ubuntu:14.04
MAINTAINER Mikel Egaña Aranguren <my.email#x.com>
RUN apt-get update && apt-get install -y openjdk-7-jre wget curl
RUN mkdir /LinkedDataServer
COPY google-refine-2.5 /LinkedDataServer/google-refine-2.5
COPY blazegraph /LinkedDataServer/blazegraph
COPY jetty /LinkedDataServer/jetty
EXPOSE 9999
EXPOSE 3333
EXPOSE 8080
WORKDIR /LinkedDataServer
CMD java -server -jar blazegraph/bigdata-bundled.jar
CMD google-refine-2.5/refine -i 0.0.0.0
WORKDIR /LinkedDataServer/jetty
CMD java -jar start.jar jetty.port=8080
I run the container and it does map the appropriate ports:
docker run -d -p 9999:9999 -p 3333:3333 -p 8080:8080 mikeleganaaranguren/linked-data-server:0.0.1
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a08709d23acb mikeleganaaranguren/linked-data-server:0.0.1 /bin/sh -c 'java -ja 5 seconds ago Up 4 seconds 0.0.0.0:3333->3333/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:9999->9999/tcp dreamy_engelbart
The triple store, for example, seems to be working. If I go to 127.0.0.1:9999, I can access the triple store:
However, if I try to do anything (queries, uploading data, ...), the triple store simply fails with "ERROR: Could not contact server". Since the same setup works on the host, I assume I'm doing something wrong with Docker. I have tried -P instead of mapping the ports, and --net=host, but I get the same error.
PS: Jetty also fails in the same fashion, and GRefine is not even working.
You'll need to make sure to use the IP of the Docker container to access the Blazegraph instance. Outside of the container, it will not be running on 127.0.0.1, but rather on the IP assigned to the Docker container.
You'll need to run something like
docker inspect --format '{{ .NetworkSettings.IPAddress }}' "CONTAINER ID"
where CONTAINER ID is the ID of your container.
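For example, using the container ID from the docker ps output above (the resulting IP is only illustrative):
docker inspect --format '{{ .NetworkSettings.IPAddress }}' a08709d23acb
# prints something like 172.17.0.2, so Blazegraph would be reached at http://172.17.0.2:9999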
