create a web2py docker image and access it through browser - docker

I'm trying to build a docker image of web2py on top of ubuntu. Given the docker file
#######################
# Web2py installation #
#######################
# Set the base image for this installation
FROM ubuntu
# File Author / Maintainer
MAINTAINER sandilya28
#Update the repository sources list
RUN apt-get update --assume-yes
########### BEGIN INSTALLATION #############
## Install Git first
RUN apt-get install git-core --assume-yes && \
cd /home/ && \
git clone --recursive https://github.com/web2py/web2py.git
## Install Python
RUN sudo apt-get install python --assume-yes
########## END INSTALLATION ################
# Expose the default port
EXPOSE 8000
WORKDIR /home/
I build an image using the above Dockerfile:
docker build -t sandilya28/web2py .
Then I start a container from that image:
docker run --name my_web2py -p 8000:8000 -it sandilya28/web2py bash
The ip address of the host is
192.168.59.103
which can be found by using boot2docker ip
Inside the container I start the web2py server using
python web2py/web2py.py
and I'm trying to access the web2py GUI at 192.168.59.103:8000, but it says the page is not available.
How can I access the web2py GUI from the browser?

Creating a Docker image that runs the development webserver will leave you with a very slow solution, as that webserver is single threaded and will also serve all static files. It's meant for development.
As you don't use https it will also disable the web2py admin interface: that's only available over http if you access it from localhost.
That being said, you can get your solution up and running by starting web2py with:
python web2py.py --nogui -a admin -i 0.0.0.0
All options are important, as web2py needs to start the server without asking any questions, and it needs to bind to the external network interface address.
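If you'd rather have the container start web2py automatically instead of launching it by hand from a shell, a hypothetical addition to the question's Dockerfile could look like this (the -p 8000 flag is an assumption chosen to match the EXPOSEd port):
# Hypothetical addition: start web2py on container start, bound to all
# interfaces on the exposed port (the password "admin" is just an example).
WORKDIR /home/web2py
CMD ["python", "web2py.py", "--nogui", "-a", "admin", "-i", "0.0.0.0", "-p", "8000"]
With that in place, docker run --name my_web2py -p 8000:8000 sandilya28/web2py should serve the app at 192.168.59.103:8000 without needing an interactive shell.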
When you want to use a production-ready Docker image to run web2py you will need some additional components in your image; nginx, uwsgi and supervisord would make it a lot faster and give you the option to enable https. Note: for bigger projects you will probably also need the Python bindings for MySQL or PostgreSQL, and a separate container for the database.
A production example, without fancy DB support, can be found here:
https://github.com/acidjunk/docker-web2py
It can be installed from the docker hub with:
docker pull acidjunk/web2py
Make sure to read the instructions as you'll need a web2py app; that will be mounted in the container. If you just want to start a web2py server to fiddle around with the example or welcome app you can use:
docker pull thehipbot/web2py
Start it with:
docker run -p 443:443 -p 80:80 thehipbot/web2py
Then fire up a browser to
https://192.168.59.103

Look at the example app I created on github:
Main features:
Stripped down version of the base w2p app
Dev mode friendly (admin console)
Served by Gunicorn (optimized for use in Docker containers; see the sketch after this list)
Naked URL i.e. http://localhost:8080, no extra URL paths
Dockerfile + k8s
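For reference, here's a minimal sketch of one way to serve web2py with Gunicorn, assuming the stock wsgihandler.py that ships at the web2py root (the repo linked above may wire this up differently):
pip install gunicorn
cd /path/to/web2py
gunicorn --bind 0.0.0.0:8080 wsgihandler:application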
Hope this helps.

Related

How to execute command from one docker container to another

I'm creating an application that will allow users to upload video files that will then be put through some processing.
I have two containers.
Nginx container that serves the website where users can upload their video files.
Video processing container that has FFmpeg and some other processing stuff installed.
What I want to achieve: I need container 1 to be able to run a bash script on container 2.
One possibility as far as I can see is to make them communicate over HTTP via an API. But then I would need to install a web server in container 2 and write an API which seems a bit overkill.
I just want to execute a bash script.
Any suggestions?
You have a few options, but the first 2 that come to mind are:
In container 1, install the Docker CLI and bind mount /var/run/docker.sock (you need to specify the bind mount from the host when you start the container). Then, inside the container, you should be able to use docker commands against the bind-mounted socket as if you were executing them from the host (you might also need to chmod the socket inside the container to allow a non-root user to do this).
You could install SSHD on container 2, and then ssh in from container 1 and run your script. The advantage here is that you don't need to make any changes inside the containers to account for the fact that they are running in Docker and not bare metal. The down side is that you will need to add the SSHD setup to your Dockerfile or the startup scripts.
Most of the other ideas I can think of are just variants of option (2), with SSHD replaced by some other tool.
Also be aware that Docker networking is a little strange (at least on Mac hosts), so you need to make sure that the containers are on the same Docker network and are able to communicate over it.
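For example, a hedged sketch of putting both containers on a shared user-defined network (image names are placeholders):
docker network create app-net
docker run -d --name web --network app-net my-nginx-image
docker run -d --name worker --network app-net my-ffmpeg-image
# "web" can now reach "worker" by name, e.g. via ssh or an HTTP call to http://worker/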
Warning:
To be completely clear, do not use option 1 outside of a lab or very controlled dev environment. It is taking a secure socket that has full authority over the Docker runtime on the host and granting unchecked access to it from a container. Doing that makes it trivially easy to break out of the Docker sandbox and compromise the host system. About the only place I would consider it acceptable is as part of a full-stack integration test setup that will only be run ad hoc by a developer. It's a hack that can be a useful shortcut in some very specific situations, but the drawbacks cannot be overstated.
I wrote a Python package especially for this use case.
Flask-Shell2HTTP is a Flask extension to convert a command line tool into a RESTful API with just 5 lines of code.
Example Code:
from flask import Flask
from flask_executor import Executor
from flask_shell2http import Shell2HTTP
app = Flask(__name__)
executor = Executor(app)
shell2http = Shell2HTTP(app=app, executor=executor, base_url_prefix="/commands/")
# map POST /commands/saythis to the `echo` binary
shell2http.register_command(endpoint="saythis", command_name="echo")
# map POST /commands/run to a local script
shell2http.register_command(endpoint="run", command_name="./myscript")
if __name__ == "__main__":
    app.run(port=4000)  # port 4000 matches the curl example below
These endpoints can then be called easily, for example:
$ curl -X POST -H 'Content-Type: application/json' -d '{"args": ["Hello", "World!"]}' http://localhost:4000/commands/saythis
You can use this to create RESTful micro-services that can execute pre-defined shell commands/scripts with dynamic arguments asynchronously and fetch the result.
It supports file upload, callback functions, reactive programming and more. I recommend checking out the Examples.
Running a docker command from a container is not straightforward and not really a good idea (in my opinion), because:
You'll need to install docker in the container (and do docker-in-docker stuff)
You'll need to share the unix socket, which is not a good thing if you don't know exactly what you're doing.
So, this leaves us with two solutions:
Install ssh in your container and execute the command through ssh
Share a volume and have a process that watches for something to trigger your batch (see the sketch below)
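A hedged sketch of that second option, with made-up image and script names: both containers mount the same volume, and container 2 runs a small loop that executes the batch whenever a trigger file shows up.
docker volume create jobs
docker run -d --name web -v jobs:/jobs my-nginx-image
docker run -d --name worker -v jobs:/jobs my-ffmpeg-image \
  sh -c 'while true; do
           if [ -f /jobs/trigger ]; then rm /jobs/trigger && /process-video.sh; fi
           sleep 2
         done'
Container 1 then triggers the processing by writing /jobs/trigger (plus any job parameters) into the shared volume.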
It was mentioned here before, but a reasonable, semi-hacky option is to install SSH in both containers and then use ssh to execute commands on the other container:
# install SSH, if you don't have it already
sudo apt install openssh-server
# start the ssh service
sudo service ssh start
# start the daemon
sudo /usr/sbin/sshd -D &
Assuming you don't want to always be root, you can add a default user (in this case, 'foobob'):
useradd -m --no-log-init --system --uid 1000 foobob -s /bin/bash -g sudo -G root
#change password
echo 'foobob:foobob' | chpasswd
Do this on both the source and target containers. Now you can execute a command from container_1 to container_2.
# obtain container-id of target container using 'docker ps'
ssh foobob@<container-id> << "EOL"
echo 'hello bob from container 1' > message.txt
EOL
You can automate the password with ssh-agent, or you can use something a bit more hacky with sshpass (install it first using sudo apt install sshpass):
sshpass -p 'foobob' ssh foobob@<container-id>
I believe
docker exec -it <container_name> <command>
should work, even inside the container.
You could also try mounting docker.sock into the container you want to execute the command from:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
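Putting those two ideas together, a hedged sketch (image, container and script names are placeholders, and the image running the command needs the docker CLI installed):
docker run -d --name web \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-nginx-image
# then, from a shell inside "web":
docker exec video-processor bash /scripts/process.sh
Keep in mind the warning earlier in this thread: exposing docker.sock effectively gives the container full control over the Docker host.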

Configure Docker with proxy per host/url

I use Docker Toolbox on Windows 7 in a corporate environment. My workflow requires pulling images from one artifactory and pushing them to a different one (e.g. external and internal). Each artifactory requires a different proxy to access it. Is there a way to configure the Docker daemon to select a proxy based on the URL? Or, if not, what else can I do to make this work?
Since, as Pierre B. mentioned, Docker daemon does not support URL-based proxy selection, the solution is to point it to a local proxy configured to select the proper upstream proxy based on the URL.
While any HTTP[S] proxy capable of upstream selection would do (the pac4cli project being particularly interesting for its advertised capability to select the upstream based on the proxy auto-discovery protocol used by most web browsers in a corporate setting), I've chosen to use tinyproxy as a more mature and lightweight solution. Furthermore, I've decided to run my proxy inside the docker-machine VM in order to simplify its deployment and make sure the proxy is always running when the Docker daemon needs it.
Below are the steps I used to set up my system. I'm especially grateful to phoenix for providing steps to set up Docker Toolbox on Windows behind a corporate proxy, and will borrow heavily from that answer.
From this point on I will assume either Docker Quickstart Terminal or GitBash, with docker in the PATH, as your command line console and that "username" is your Windows user name.
Step 1: Build tinyproxy on your target platform
Begin by pulling a clean Linux distribution (I used CentOS) and running bash inside it:
docker run -it --name=centos centos bash
Next, install the tools we'll need:
yum install -y make gcc
After that we pull the latest release of Tinyproxy from its GitHub repository and extract it inside root's home directory (at the time of this writing the latest release was 1.10.0):
cd
curl -L https://github.com/tinyproxy/tinyproxy/releases/download/1.10.0/tinyproxy-1.10.0.tar.gz \
| tar -xz
cd tinyproxy-1.10.0
Now let's configure and build it:
./configure --enable-upstream \
--disable-filter \
--disable-reverse \
--disable-transparent \
--disable-xtinyproxy
make
While --enable-upstream is obviously required, disabling other default features is optional but a good practice. To make sure it actually works run:
./src/tinyproxy -h
You should see something like:
Usage: tinyproxy [options]
Options are:
-d Do not daemonize (run in foreground).
-c FILE Use an alternate configuration file.
-h Display this usage information.
-v Display version information.
Features compiled in:
Upstream proxy support
For support and bug reporting instructions, please visit
<https://tinyproxy.github.io/>.
We exit the container by pressing Ctrl+D and copy the executable to a special folder location accessible from the docker-machine VM:
docker cp centos://root/tinyproxy-1.10.0/src/tinyproxy \
/c/Users/username/tinyproxy
Substitute "username" with your Windows user name. Please note that the double slash (//) before "root" is required to disable MINGW path conversion.
Now we can delete the container:
docker rm centos
Step 2: Point docker daemon to a local proxy port
Choose a TCP port number to run the proxy on. This can be any port that is not in use on the docker-machine VM. I will use number 8618 in this example.
First, let's delete the existing default Docker VM:
WARNING: This will permanently erase all currently stored containers and images
docker-machine rm -f default
Next, we re-create the default machine setting HTTP_PROXY and HTTPS_PROXY environment variables to the local host and the port we selected, and then refresh our shell environment:
docker-machine create default \
--engine-env HTTP_PROXY=http://localhost:8618 \
--engine-env HTTPS_PROXY=http://localhost:8618
eval $(docker-machine env)
Optionally, we could also set NO_PROXY environment variable to list hosts and/or wildcards (separated by ;) to which the daemon should connect directly, bypassing the proxy.
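For example, a hypothetical variant of the command above (the host list is made up for illustration, following the format just described):
docker-machine create default \
    --engine-env HTTP_PROXY=http://localhost:8618 \
    --engine-env HTTPS_PROXY=http://localhost:8618 \
    --engine-env NO_PROXY="localhost;127.0.0.1;*.corp.example.com"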
Step 3: Set up tinyproxy inside docker-machine VM
First, we will create two files in the /c/Users/username directory (this is where our tinyproxy binary should reside after Step 1 above) and then we'll copy them to the VM.
The first file is tinyproxy.conf; the exact syntax is documented on the Tinyproxy website, but the example below should have all the settings you need:
# These settings can be customized to your liking,
# the port though must be the same we used in Step 2
listen 127.0.0.1
port 8618
user nobody
group nogroup
loglevel critical
syslog on
maxclients 50
startservers 2
minspareServers 2
maxspareServers 5
disableviaheader yes
# Here is the actual proxy selection, rules apply from top
# to bottom, and the last one is the default. More info on:
# https://tinyproxy.github.io/
upstream http proxy1.corp.example.com:80 ".foo.example.com"
upstream http proxy2.corp.example.com:80 ".bar.example.com"
upstream http proxy.corp.example.com:82
In the example above:
http://proxy1.corp.example.com:80 will be used to connect to URLs that end with "foo.example.com", such as http://www.foo.example.com
http://proxy2.corp.example.com:80 will be used to connect to URLs that end with "bar.example.com", such as http://www.bar.example.com, and
http://proxy.corp.example.com:82 will be used to connect to all other URLs
It is also possible to match exact host names, IP addresses, subnets and hosts without domains.
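Purely for illustration, hypothetical rules of those forms could look like the following (hosts and subnets are made up; check the tinyproxy.conf man page of your version for the exact matching syntax):
# exact host name without a domain
upstream http proxy1.corp.example.com:80 "buildhost"
# IP subnet
upstream http proxy2.corp.example.com:80 "10.20.0.0/16"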
The second file is the shell script that will launch the proxy; its name must be bootlocal.sh:
#! /bin/sh
# Terminate on error
set -e
# Switch to the script directory
cd $(dirname $0)
# Launch proxy server
./tinyproxy -c tinyproxy.conf
Now, let's connect to the docker VM, get root, and switch to boot2docker directory:
docker-machine ssh
sudo -s
cd /var/lib/boot2docker
Next, we'll copy all three files over and set their permissions:
cp /c/Users/username/{tinyproxy{,.conf},bootlocal.sh} .
chmod 755 tinyproxy bootlocal.sh
chmod 644 tinyproxy.conf
Exit VM session by pressing Ctrl+D twice and restart it:
docker-machine restart default
That's it! Now docker should be able to pull and push images from different URLs, automatically selecting the right proxy server.

Superset for Clickhouse in docker with SQLAlchemy

I'm trying to setup Apache Superset for Clickhouse.
My understanding so far is that I need to install SQLAlchemy for Clickhouse
https://github.com/xzkostyan/clickhouse-sqlalchemy
I'm in Ubuntu 16.04 LTS, and using the Docker vanilla version of Clickhouse and of Superset:
https://store.docker.com/community/images/yandex/clickhouse-server
https://hub.docker.com/r/amancevice/superset/
without special settings
Any idea how I can bridge the two docker containers with clickhouse-sqlalchemy?
Where and how, in that case, would I install it?
(If you have a sample command line that I can reuse, that would be great.)
You don't need to bridge them: what you want is a superset server (that you happen to be running via docker) to connect to a clickhouse database (that you also happen to be running via docker).
You also shouldn't need to install SQLAlchemy for Clickhouse: looking at the dockerfile at https://hub.docker.com/r/amancevice/superset/~/dockerfile/ that image already has sqlalchemy-clickhouse installed for you.
Your steps should be as follows:
When you docker run --detach --name superset [options] amancevice/superset you should have your superset instance running at http://localhost:8088/
Similarly, when you run $ docker run -d --name some-clickhouse-server --ulimit nofile=262144:262144 -v /path/to/your/config.xml:/etc/clickhouse-server/config.xml yandex/clickhouse-server you should end up with a clickhouse instance that you can access via SQLAlchemy at something like clickhouse://default:@some-clickhouse-server/test
You'd need to modify that connection URI based on your config.xml - and you should be able to double-check that it works by connecting to it in your python console.
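A minimal sketch of such a check, assuming the hostname and credentials from the example URI above (adjust them to whatever your config.xml actually defines):
# Hedged sketch: run this wherever the clickhouse SQLAlchemy dialect is
# installed (e.g. inside the superset container) to verify connectivity.
from sqlalchemy import create_engine, text
engine = create_engine("clickhouse://default:@some-clickhouse-server/test")
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).fetchall())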
You should then be able to connect superset to your clickhouse db in the same way you'd connect to any other DB: by navigating into Superset's menu > Sources > Databases > [new]
Consider using the already prepared and configured docker-compose.yml included in Apache Superset (see https://github.com/apache/superset/blob/master/docker-compose.yml).
To work with ClickHouse, a SQLAlchemy driver needs to be installed. There are two of them:
clickhouse-sqlalchemy by xzkostyan
sqlalchemy-clickhouse by cloudflare.
I recommend using clickhouse-sqlalchemy because it is actively supported and evolving, and it supports both protocols available for interacting with ClickHouse - HTTP and TCP (the native protocol).
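Those two protocols correspond to two different connection string prefixes; generic examples (host, credentials and database are placeholders, with the usual ClickHouse default ports shown):
# HTTP protocol (ClickHouse HTTP port, 8123 by default)
clickhouse+http://user:password@host:8123/database
# native TCP protocol (port 9000 by default)
clickhouse+native://user:password@host:9000/database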
Let's connect to one of the public ClickHouse:
either Demo Yandex CH
docker run -it --rm yandex/clickhouse-client:latest \
--host gh-api.clickhouse.tech --user explorer -s
or Demo Altinity.Cloud CH
docker run -it --rm yandex/clickhouse-client:latest \
--host github.demo.trial.altinity.cloud -s --user demo --password demo
download source code from repo https://github.com/apache/superset
execute the commands
cd superset-master
docker-compose up
# open the new terminal
docker-compose exec superset bash /app/docker/docker-init.sh
docker-compose exec superset pip install clickhouse-sqlalchemy
docker-compose restart
wait for containers to be started and the web app to be built (see the console output, webpack should finish its work)
browse URL http://localhost:8088 (use credentials admin / admin)
add the database using one of the connection strings:
# connection string for Demo Yandex ClickHouse
clickhouse+native://explorer@gh-api.clickhouse.tech/default?secure=true
# connection string for Demo Altinity.Cloud CH
clickhouse+native://demo:demo@github.demo.trial.altinity.cloud/default?secure=true
See also https://stackoverflow.com/a/66006784/303298.

Add xserver into Docker container (the host is headless)

I'm building a Docker container which has Maven and some dependencies. It then executes a script inside the container. It seems one of those dependencies needs an X server to work. Nothing is shown on screen, but it seems to be necessary and can't be avoided.
I got it working by putting ENV DISPLAY=x.x.x.x:0 in the Dockerfile; it connects to the external X server and works. But the point is to make a self-sufficient Docker container.
So I need to add an X server to my container by adding the necessary packages to the Dockerfile. And I want that X server to be accessible only by the Docker container itself, not externally.
The FROM of my Dockerfile is FROM ubuntu:15.04 and that is unchangeable, because my Dockerfile has a lot of things depending on that specific version.
I've read some posts about how to connect from a docker container to the X server of the Docker host machine, like this one. But as I put in the question's title, the Docker host is headless and doesn't have an X server.
What would be the minimum set of apt-get packages to install in the container to have an X server?
I guess my Dockerfile will need the display environment variable, something like ENV DISPLAY=:0. Is this correct?
Does anything else need to be added to the docker run command?
Thank you.
You can install and run x11vnc inside your docker container. I'll show you how to get it running on a headless host and connect to it remotely to run X applications (e.g. xterm).
Dockerfile:
FROM joprovost/docker-x11vnc
RUN mkdir ~/.vnc && touch ~/.vnc/passwd
RUN x11vnc -storepasswd "vncdocker" ~/.vnc/passwd
EXPOSE 5900
CMD ["/usr/bin/x11vnc", "-forever", "-usepw", "-create"]
And build a docker image named vnc:
docker build -t vnc .
Run a container, and remember to map port 5900 to the host for remote connections (I'm using --net=host here):
docker run -d --name=vnc --net=host vnc
Now you have a running container with x11vnc inside. Download a VNC client like RealVNC and try to connect to <server_ip>:5900 from your local machine; the password is vncdocker, which was set in the Dockerfile. You'll get to the remote X screen with an xterm open. If you execute env you will find the environment variable DISPLAY=:20.
Now let's go into the docker container and try to open another xterm:
docker exec -it vnc bash
Then execute the following command inside container:
DISPLAY=:20 xterm
A new xterm window will pop up in your VNC client window. I guess that's the way you are going to run your application.
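If you want your own script (the Maven dependency from the question) to pick up that display without exporting it by hand, a hypothetical addition to your Dockerfile could be:
# Assumption: the virtual display created by x11vnc -create ends up at :20,
# as observed above; adjust the number if your setup differs.
ENV DISPLAY=:20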
Note:
The base vnc image is based on ubuntu 14, but I guess the packages are similar in ubuntu 16
Don't expose port 5900 if you don't want remote connections
Hope this can help :-)

Connection refused on docker container

I'm new to Docker and trying to make a demo Rails app. I made a dockerfile that looks like this:
FROM ruby:2.2
MAINTAINER marko@codeship.com
# Install apt based dependencies required to run Rails as
# well as RubyGems. As the Ruby image itself is based on a
# Debian image, we use apt-get to install those.
RUN apt-get update && apt-get install -y \
build-essential \
nodejs
# Configure the main working directory. This is the base
# directory used in any further RUN, COPY, and ENTRYPOINT
# commands.
RUN mkdir -p /app
WORKDIR /app
# Copy the Gemfile as well as the Gemfile.lock and install
# the RubyGems. This is a separate step so the dependencies
# will be cached unless changes to one of those two files
# are made.
COPY Gemfile Gemfile.lock ./
RUN gem install bundler && bundle install --jobs 20 --retry 5
# Copy the main application.
COPY . ./
# Expose port 8080 to the Docker host, so we can access it
# from the outside.
EXPOSE 8080
# The main command to run when the container starts. Also
# tell the Rails dev server to bind to all interfaces by
# default.
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0", "-p", "8080"]
I then built it like so:
docker build -t demo .
Then I run a command to start the server, which does start it on port 8080:
Johns-MacBook-Pro:demo johnkealy$ docker run -it demo
=> Booting WEBrick
=> Rails 4.2.5 application starting in development on http://0.0.0.0:8080
=> Run `rails server -h` for more startup options
=> Ctrl-C to shutdown server
[2016-04-23 16:50:34] INFO WEBrick 1.3.1
[2016-04-23 16:50:34] INFO ruby 2.2.4 (2015-12-16) [x86_64-linux]
[2016-04-23 16:50:34] INFO WEBrick::HTTPServer#start: pid=1 port=8080
I then try to find the correct IP to navigate to:
Johns-MacBook-Pro:demo johnkealy$ docker-machine ip default
192.168.99.100
I navigate to http://192.168.99.100:8080 and get the error This site can’t be reached 192.168.99.100 refused to connect.
What could I be doing wrong ?
You need to publish the exposed ports by using the following options:
-P (upper case) or --publish-all that will tell Docker to use random ports from your host and map them to the exposed container's ports.
-p (lower case) or --publish=[] that will tell Docker to use ports you manually set and map them to the exposed container's ports.
The second option is preferred because you already know which ports are mapped. If you use the first option then you will need to check which random host ports were chosen, for example in the Ports section of docker inspect (see also the sketch below).
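A hedged sketch of that first variant, using the question's image name (the container name/id is whatever Docker assigned or what you passed with --name):
docker run -it -P demo
# in another terminal, look up which host port was mapped to 8080:
docker ps
docker port <container-name-or-id> 8080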
Just run the following command:
docker run -it -p 8080:8080 demo
After that your url will work.
If you are using Docker Toolbox on Windows 10 Home you will need to access the webpage through the docker-machine IP, which is generally 192.168.99.100.
This assumes that you are running with the publish option, like below:
docker run -it -p 8080:8080 demo
With the Windows 10 Pro version you can access it with localhost or the corresponding loopback 127.0.0.1:8080, etc. (Tomcat or whatever you wish). This is because you don't have a VirtualBox VM there; Docker runs directly on Windows Hyper-V and the loopback is directly accessible.
Verify the hosts file in Windows for any deviation. It should have
127.0.0.1 mapped to localhost
I had the same problem. I was using Docker Toolbox on Windows Home.
Instead of localhost I had to use http://192.168.99.100:8080/.
You can get the correct IP address using the command:
docker-machine ip
The above command returned 192.168.99.100 for me.
The EXPOSE command in your Dockerfile marks the container's port so that it can be bound to a port on the host machine, but it doesn't do anything else.
When running the container, specify the -p option to bind ports.
So let's say you expose port 5000. After building the image, when you run the container, run docker run -p 5000:5000 name. This binds the container's port 5000 to your laptop/computer's port 5000, and that port forwarding lets the container receive outside requests.
This should do it.
In Docker Quickstart Terminal run the following command to get the VM's IP address:
$ docker-machine ip
192.168.99.100
In Windows, you also normally need to run the command line as administrator.
As a standard user:
docker build -t myimage -f Dockerfile .
Sending build context to Docker daemon 106.8MB
Step 1/1 : FROM mcr.microsoft.com/dotnet/core/runtime:3.0
Get https://mcr.microsoft.com/v2/: dial tcp: lookup mcr.microsoft.com on [::1]:53: read udp [::1]:45540->[::1]:53: read: connection refused
But as an administrator:
docker build -t myimage -f Dockerfile .
Sending build context to Docker daemon 106.8MB
Step 1/1 : FROM mcr.microsoft.com/dotnet/core/runtime:3.0
3.0: Pulling from dotnet/core/runtime
68ced04f60ab: Pull complete
e936bd534ffb: Pull complete
caf64655bcbb: Pull complete
d1927dbcbcab: Pull complete
Digest: sha256:e0c67764f530a9cad29a09816614c0129af8fe3bd550eeb4e44cdaddf8f5aa40
Status: Downloaded newer image for mcr.microsoft.com/dotnet/core/runtime:3.0
---> f059cd71a22a
Successfully built f059cd71a22a
Successfully tagged myimage:latest
Make sure that you use the -p flag before the image name like this:
docker run -p 8080:8080 demo
