My end goal is to run an experiment from an API.
The experiment comes from:
https://github.com/mlflow/mlflow/tree/master/examples/tensorflow/tf2
but I exported the files to my own Git repository, which I clone inside the app image below.
I have two images in my docker-compose setup.
Project tree:
|_app/
| |_Dockerfile
|
|_mlflow/
| |_Dockerfile
|
|_docker-compose.yml
app/Dockerfile
FROM continuumio/anaconda3
ENV APP_HOME ./
WORKDIR ${APP_HOME}
RUN conda config --append channels conda-forge
RUN conda install --quiet --yes \
    'mlflow' \
    'psycopg2' \
    'tensorflow'
RUN pip install pylint
RUN pwd;ls \
    && git clone https://github.com/MChrys/QuickSign.git
RUN pwd;ls \
    && cd QuickSign \
    && pwd;ls
COPY . .
#RUN conda install jupyter
#CMD jupyter notebook --ip=0.0.0.0 --port=8888 --allow-root --no-browser
CMD cd QuickSign && mlflow run .
mlflow/Dockerfile
FROM python:3.7.0
RUN pip install mlflow
RUN mkdir /mlflow/
CMD mlflow server \
    --backend-store-uri /mlflow \
    --host 0.0.0.0
docker-compose.yml
version: '3'
services:
  notebook:
    build:
      context: ./app
    ports:
      - "8888:8888"
    depends_on:
      - mlflow
    environment:
      MLFLOW_TRACKING_URI: 'http://mlflow:5000'
  mlflow:
    build:
      context: ./mlflow
    expose:
      - "5000"
    ports:
      - "5000:5000"
When I run docker-compose up, I get:
notebook_1_74059cdc20ce | response = requests.request(**kwargs)
notebook_1_74059cdc20ce | File "/opt/conda/lib/python3.7/site-packages/requests/api.py", line 60, in request
notebook_1_74059cdc20ce | return session.request(method=method, url=url, **kwargs)
notebook_1_74059cdc20ce | File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
notebook_1_74059cdc20ce | resp = self.send(prep, **send_kwargs)
notebook_1_74059cdc20ce | File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 646, in send
notebook_1_74059cdc20ce | r = adapter.send(request, **kwargs)
notebook_1_74059cdc20ce | File "/opt/conda/lib/python3.7/site-packages/requests/adapters.py", line 516, in send
notebook_1_74059cdc20ce | raise ConnectionError(e, request=request)
notebook_1_74059cdc20ce | requests.exceptions.ConnectionError: HTTPConnectionPool(host='mlflow', port=5000): Max retries exceeded with url: /api/2.0/mlflow/runs/create (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd5db4edc50>: Failed to establish a new connection: [Errno 111] Connection refused'))
It looks like the problem is that I am running a project that is not found in the server image, since I run it in the app image, but I don't know how to sort this out. I need to trigger the experiment from a future Flask app.
It looks like the server is not reachable from the app. I assume docker-compose.yml is used by the project running in the app, so it is trying to contact the MLflow server at MLFLOW_TRACKING_URI: 'http://mlflow:5000'. Is http://mlflow:5000 a hostname you have set up? Where is it supposed to be reachable from?
The problem came from Docker for Windows: I was unable to get Docker Compose working there, but there is no problem building and running it on an Ubuntu virtual machine.
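Separately from the Windows issue: depends_on only orders container startup, it does not wait until the MLflow server actually accepts connections, so a "Connection refused" race like the one above can still happen. A small stdlib-only guard (a hypothetical helper, not part of MLflow) that the future Flask app could call before triggering mlflow run:

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0):
    """Poll until host:port accepts TCP connections, or raise after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful plain TCP connect means the server is listening.
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)  # server not up yet; retry until the deadline
    raise TimeoutError(f"{host}:{port} not reachable after {timeout:.0f}s")

# For example, before triggering the experiment:
# wait_for_port("mlflow", 5000)
# subprocess.run(["mlflow", "run", "QuickSign"], check=True)
```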
Related
I'm trying to build a containerized application that will read from a storage service, do some operations, and write a "processed" file back to the same storage service.
I tried using NFS as the storage service but hit a few bottlenecks, which led me to Minio, which works out very well for me since I'm already familiar with AWS S3.
So I spun up a Minio container and wrote a simple Python script that checks for .csv files uploaded in the last 60 seconds.
I tested this locally and it worked flawlessly.
This is the sample code -
import time
from minio import Minio
from minio.error import ResponseError

# Initialize the Minio client
host = "minio_server:9000"
access_key = "ROOTNAME"
secret_key = "CHANGEME123"
client = Minio(host, access_key=access_key,
               secret_key=secret_key, secure=False)

# Set the name of the bucket to monitor
bucket_name = 'my-bucket'

# Set the interval for checking the bucket (in seconds)
check_interval = 60

while True:
    # Get the list of objects in the bucket
    objects = client.list_objects(bucket_name, prefix='', recursive=True)
    # Check the time of each object
    for obj in objects:
        # Check if the object is a CSV file
        if obj.object_name.endswith('.csv'):
            # Check if the object was uploaded in the last 60 seconds
            current_time = time.time()
            if current_time - obj.last_modified.timestamp() < check_interval:
                print(f'Found new CSV file: {obj.object_name}')
    # Sleep for the specified interval before checking again
    time.sleep(check_interval)
And this works exactly as intended.
So next I containerized this application and created a Dockerfile
FROM ubuntu:20.04
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y python3 python3-pip
RUN DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get -y install tzdata
RUN apt-get install -y libenchant1c2a git
RUN pip3 install \
    argparse \
    boto3 \
    numpy==1.20.3 \
    scipy \
    pandas \
    scikit-learn \
    matplotlib \
    plotly \
    kaleido \
    fpdf \
    regex \
    pyenchant \
    minio \
    pythonping \
    openpyxl
RUN git clone https://github.com/nedap/dateinfer.git && \
    cd dateinfer && \
    pip3 install .
ADD core.py /core.py
ENTRYPOINT ["python3"]
CMD ["/core.py"]
Note: I need the other dependencies for the actual application that I'm going to write once this test passes.
I created a docker-compose.yaml file to combine the two containers.
version: "2.1"
services:
  minio_server:
    image: quay.io/minio/minio
    networks:
      - appnet
    container_name: minio_server
    restart: unless-stopped
    privileged: true
    command: server --address ":9000" --console-address ":9001" /data
    environment:
      - MINIO_ROOT_USER=ROOTNAME
      - MINIO_ROOT_PASSWORD=CHANGEME123
    volumes:
      - /minio/data:/data
    ports:
      - 9090:9090
      - 9000:9000
  minio_core_server:
    image: miniocore:latest
    networks:
      - appnet
    depends_on:
      - minio_server
networks:
  appnet:
    name: appnet
And I made some changes to the source code to do something super simple:
from minio import Minio
from minio.error import ResponseError

# Set the URL of the Minio server
minio_url = "minio_server:9000"

# Set the access key and secret key for the Minio server
access_key = "ROOTNAME"
secret_key = "CHANGEME123"

# Initialize the Minio client
client = Minio(minio_url,
               access_key=access_key,
               secret_key=secret_key,
               secure=True)

# Create the bucket
input_bucket = client.make_bucket('sample.bucket')
But I'm getting the following error:
File "/usr/local/lib/python3.8/dist-packages/urllib3/util/retry.py", line 592, in increment
minio_core_server_1 | raise MaxRetryError(_pool, url, error or ResponseError(cause))
minio_core_server_1 | urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='minio_server', port=9000): Max retries exceeded with url: /sample.bucket (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1131)')))
Troubleshooting steps that I tried:
I initially thought that the service container was unable to establish a connection to the Minio container, so I tried to ping the Minio container from the application in the service container:
from minio import Minio
#from minio.error import ResponseError
# Set the URL of the Minio server
from pythonping import ping
ping("minio_server", verbose=True)
and this is the response -
minio_core_server_1 | Reply from 192.168.192.2, 29 bytes in 0.03ms
minio_core_server_1 | Reply from 192.168.192.2, 29 bytes in 0.01ms
minio_core_server_1 | Reply from 192.168.192.2, 29 bytes in 0.01ms
minio_core_server_1 | Reply from 192.168.192.2, 29 bytes in 0.01ms
So it is, in fact, able to establish a connection with the Minio container.
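One thing worth noting, since the thread never states it explicitly: [SSL: WRONG_VERSION_NUMBER] is the classic symptom of a TLS client talking to a plain-HTTP port. The containerized script passes secure=True to Minio(...), while the local script used secure=False, and the compose file starts the server without any TLS certificates, so switching back to secure=False is the likely fix. The handshake failure itself can be reproduced with the standard library alone (no MinIO involved):

```python
import http.server
import socket
import ssl
import threading

# A plain-HTTP server stands in for MinIO started without TLS certificates.
server = http.server.HTTPServer(("127.0.0.1", 0), http.server.BaseHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client configured for TLS (like Minio(..., secure=True)) tries to
# handshake against the plain-HTTP port and fails.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

handshake_failed = False
raw = socket.create_connection(("127.0.0.1", port), timeout=3)
try:
    ctx.wrap_socket(raw, server_hostname="localhost")
except OSError:  # ssl.SSLError, e.g. WRONG_VERSION_NUMBER, or a timeout
    handshake_failed = True
finally:
    raw.close()
    server.shutdown()

print(handshake_failed)  # True
```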
I am trying to run trac in one container and MariaDB in another. In the trac container, when I try to initialize the environment using trac-admin /path/to/env initenv, I get the following error while specifying the connection string for the database running on the other container:
Database connection string [sqlite:db/trac.db]> mysql://root:password@X.X.X.X:3306/trac
Initenv for '/usr/local/trac1' failed.
Failed to create environment.
'NoneType' object has no attribute 'encoding'
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/trac/admin/console.py", line 431, in do_initenv
options=options)
File "/usr/local/lib/python2.7/dist-packages/trac/core.py", line 141, in __call__
self.__init__(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/trac/env.py", line 259, in __init__
self.create(options)
File "/usr/local/lib/python2.7/dist-packages/trac/env.py", line 566, in create
DatabaseManager(self).init_db()
File "/usr/local/lib/python2.7/dist-packages/trac/db/api.py", line 285, in init_db
connector.init_db(**args)
File "/usr/local/lib/python2.7/dist-packages/trac/db/mysql_backend.py", line 133, in init_db
params)
File "/usr/local/lib/python2.7/dist-packages/trac/db/mysql_backend.py", line 118, in get_connection
cnx = MySQLConnection(path, log, user, password, host, port, params)
File "/usr/local/lib/python2.7/dist-packages/trac/db/mysql_backend.py", line 413, in __init__
host=host, port=port, **opts)
File "/usr/local/lib/python2.7/dist-packages/pymysql/__init__.py", line 94, in Connect
return Connection(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 285, in __init__
self.encoding = charset_by_name(self.charset).encoding
AttributeError: 'NoneType' object has no attribute 'encoding'
Created and granted permissions to the trac database using the following commands:
MariaDB [(none)]> CREATE DATABASE trac DEFAULT CHARACTER SET utf8 COLLATE utf8_bin;
Query OK, 1 row affected (0.007 sec)
MariaDB [(none)]> GRANT ALL ON trac.* TO root@X.X.X.X IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.026 sec)
Dockerfile for trac:
# This is a Dockerfile for trac
FROM quay.io/official-images/debian:bullseye-slim
LABEL description="Custom httpd container image for running trac"
RUN apt-get update \
    && apt-get install -y aptitude vim git wget gcc apache2 python2 python2-dev unzip
RUN apt-get install -y default-libmysqlclient-dev
RUN wget https://bootstrap.pypa.io/pip/2.7/get-pip.py \
&& python2 get-pip.py
RUN python2 -m pip install setuptools Jinja2 babel Pygments docutils pytz textile PyMySQL trac
EXPOSE 80 8000
ENV LogLevel "info"
ADD index.html /var/www/html
#RUN trac-admin /usr/local/trac initenv trac mysql://root:user@X.X.X.X:3306/trac
#RUN tracd --port 8000 /usr/local/trac &
ENTRYPOINT ["apachectl"]
CMD ["-D", "FOREGROUND"]
Dockerfile for MariaDB:
# This is a Dockerfile for MariaDB to be used with trac
FROM docker.io/mariadb:latest
LABEL description="Custom mariadb container image for running trac"
ARG MARIADB_USER=db_user
ARG MARIADB_ROOT_PASSWORD=db_pw
ARG MARIADB_DATABASE=db_name
ENV PACKAGES openssh-server openssh-client
ENV MARIADB_USER $MARIADB_USER
ENV MARIADB_ROOT_PASSWORD $MARIADB_ROOT_PASSWORD
ENV MARIADB_DATABASE $MARIADB_DATABASE
RUN mkdir /db_backup
ADD trac.sql /db_backup
RUN apt-get update && apt-get install -y $PACKAGES
EXPOSE 3306
CMD ["mysqld"]
I figured it out myself. While creating the database using the command from the trac documentation (mentioned in the question), I was getting the following character_set_database and collation_database values:
MariaDB [trac]> SHOW VARIABLES WHERE variable_name IN ('character_set_database', 'collation_database');
+------------------------+-------------+
| Variable_name | Value |
+------------------------+-------------+
| character_set_database | utf8mb3 |
| collation_database | utf8mb3_bin |
+------------------------+-------------+
The following command should be used to create the database with the correct encodings:
MariaDB [(none)]> CREATE DATABASE trac CHARACTER SET utf8mb4 DEFAULT COLLATE utf8mb4_bin;
Query OK, 1 row affected (0.001 sec)
Results:
MariaDB [(none)]> use trac
Database changed
MariaDB [trac]> SHOW VARIABLES WHERE variable_name IN ('character_set_database', 'collation_database');
+------------------------+-------------+
| Variable_name | Value |
+------------------------+-------------+
| character_set_database | utf8mb4 |
| collation_database | utf8mb4_bin |
+------------------------+-------------+
2 rows in set (0.003 sec)
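In hindsight the traceback is consistent with this: pymysql resolves the connection charset through a lookup table, and charset_by_name returns None for a name it does not know (such as the utf8mb3 alias newer MariaDB reports), so the subsequent .encoding access fails exactly as above. A toy sketch of that failure mode (a plain dict standing in for pymysql's real charset table; the names here are illustrative, not pymysql internals):

```python
class Charset:
    """Minimal stand-in for an entry in a MySQL client's charset table."""
    def __init__(self, name, encoding):
        self.name = name
        self.encoding = encoding

# The client only knows these names (illustrative subset).
_charset_table = {
    "utf8": Charset("utf8", "utf8"),
    "utf8mb4": Charset("utf8mb4", "utf8"),
}

def charset_by_name(name):
    # Returns None for an unknown charset name instead of raising.
    return _charset_table.get(name)

# A server default the client has never heard of:
cs = charset_by_name("utf8mb3")
try:
    cs.encoding
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'encoding'
```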
I can't start my cassandra container, I get the following error when cassandra container is starting:
/usr/bin/env: ‘python3\r’: No such file or directory
My Dockerfile:
FROM cassandra:3.11.6
RUN apt-get update && apt-get install -y apt-transport-https && apt-get install software-properties-common -y
COPY ["schema.cql", "wait-for-it.sh", "bootstrap-schema.py", "/"]
RUN chmod +x /bootstrap-schema.py /wait-for-it.sh
ENV BOOTSTRAPPED_SCHEMA_FILE_MARKER /bootstrapped-schema
ENV BOOTSTRAP_SCHEMA_ENTRYPOINT /bootstrap-schema.py
ENV OFFICIAL_ENTRYPOINT /docker-entrypoint.sh
# 7000: intra-node communication
# 7001: TLS intra-node communication
# 7199: JMX
# 9042: CQL
# 9160: thrift service
EXPOSE 7000 7001 7199 9042 9160
#Change entrypoint to custom script
COPY cassandra.yaml /etc/cassandra/cassandra.yaml
ENTRYPOINT ["/bootstrap-schema.py"]
CMD ["cassandra", "-Dcassandra.ignore_dc=true", "-Dcassandra.ignore_rack=true", "-f"]
I only get this error when I include this line:
ENTRYPOINT ["/bootstrap-schema.py"]
I use Windows 10 (Docker for Windows installed).
What's wrong in this script (bootstrap-schema.py)?
#!/usr/bin/env python3
import os
import sys
import subprocess
import signal
import logging

logger = logging.getLogger('bootstrap-schema')
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)

proc_args = [os.environ['OFFICIAL_ENTRYPOINT']]
proc_args.extend(sys.argv[1:])

if not os.path.exists(os.environ["BOOTSTRAPPED_SCHEMA_FILE_MARKER"]):
    # Run the official entrypoint command as a child process
    proc = subprocess.Popen(proc_args)
    # Wait for CQL (port 9042) to be ready
    wait_for_cql = os.system("/wait-for-it.sh -t 120 127.0.0.1:9042")
    if wait_for_cql != 0:
        logger.error("CQL unavailable")
        sys.exit(1)
    logger.debug("Schema creation")
    cqlsh_ret = subprocess.run("cqlsh -f /schema.cql 127.0.0.1 9042", shell=True)
    if cqlsh_ret.returncode == 0:
        # Terminate the background process
        os.kill(proc.pid, signal.SIGTERM)
        proc.wait(20)
        # Touch the file marker
        open(os.environ["BOOTSTRAPPED_SCHEMA_FILE_MARKER"], "w").close()
        logger.debug("Schema created")
    else:
        logger.error("Schema creation error. {}".format(cqlsh_ret))
        sys.exit(1)
else:
    logger.debug("Schema already exists")

# Replace this process with the official entrypoint; execv expects the
# program name as the first element of the argument list
os.execv(os.environ['OFFICIAL_ENTRYPOINT'], proc_args)
Thanks for any tips.
EDIT
Of course I tried adding, for example:
RUN apt-get install python3
OK, my fault. There was a well-known problem: line endings. I had to convert the Windows (CRLF) files to Linux (LF) line endings, every file, scripts included. Now it works excellently :)
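For reference, the \r in 'python3\r' is a Windows CRLF line ending being read as part of the shebang. dos2unix (or Git's core.autocrlf=input setting) handles this, and a minimal stdlib sketch of the same conversion looks like:

```python
from pathlib import Path

def crlf_to_lf(path):
    """Rewrite a file in place, converting Windows (CRLF) line endings to Unix (LF)."""
    p = Path(path)
    p.write_bytes(p.read_bytes().replace(b"\r\n", b"\n"))

# A script saved with CRLF: the kernel would see the interpreter as 'python3\r'.
demo = Path("demo.py")
demo.write_bytes(b"#!/usr/bin/env python3\r\nprint('hi')\r\n")
crlf_to_lf(demo)
print(demo.read_bytes())  # b"#!/usr/bin/env python3\nprint('hi')\n"
```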
I have a Docker container running Bind9.
Inside the container, named is running as the bind user:
bind 1 0 0 19:23 ? 00:00:00 /usr/sbin/named -u bind -g
In my named.conf.local I have
channel queries_log {
    file "/var/log/bind/queries.log";
    print-time yes;
    print-category yes;
    print-severity yes;
    severity info;
};
category queries { queries_log; };
After starting the container, the log file is created
-rw-r--r-- 1 bind bind 0 Nov 14 19:23 queries.log
but it always remains empty.
On the other hand, the 'queries' logs are still visible using docker logs ...
14-Nov-2018 19:26:10.463 client #0x7f179c10ece0 ...
Using the same config without Docker works fine.
My docker-compose.yml:
version: '3.6'
services:
  bind9:
    build: .
    image: bind9:1.9.11.3
    container_name: bind9
    ports:
      - "53:53/udp"
      - "53:53/tcp"
    volumes:
      - ./config/named.conf.options:/etc/bind/named.conf.options
      - ./config/named.conf.local:/etc/bind/named.conf.local
My Dockerfile:
FROM ubuntu:18.04
ENV BIND_USER=bind \
    BIND_VERSION=1:9.11.3
RUN apt-get update -qq \
    && DEBIAN_FRONTEND=noninteractive apt-get --no-install-recommends install -y \
       bind9=${BIND_VERSION}* \
       bind9-host=${BIND_VERSION}* \
       dnsutils \
    && rm -rf /var/lib/apt/lists/*
COPY entrypoint.sh /sbin/entrypoint.sh
RUN chmod 755 /sbin/entrypoint.sh
ENTRYPOINT ["/sbin/entrypoint.sh"]
CMD ["/usr/sbin/named"]
From the named documentation:
-f
    Run the server in the foreground (i.e. do not daemonize).
-g
    Run the server in the foreground and force all logging to stderr.
Try using -f instead of -g.
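The entrypoint script isn't shown in the question, so where -g gets passed is an assumption, but if the flags end up in the Dockerfile's CMD the change would look like:

```dockerfile
# -f keeps named in the foreground (required for Docker) without forcing
# all logging to stderr, so the channel/file logging config takes effect.
CMD ["/usr/sbin/named", "-u", "bind", "-f"]
```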
Trying to install ckan from Dockerfile in:
Docker Community Edition
Version 17.06.2-ce-mac27 (19124)
Channel: stable
428bd6ceae*
FIRST ATTEMPT
These are the steps followed:
$ docker pull ckan/solr
$ docker pull ckan/ckan
$ docker pull ckan/postgresql
After downloading the images I get:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ckan/solr latest 4acd7db7b1f7 3 days ago 517MB
ckan/ckan latest 77dd30c92740 3 days ago 642MB
ckan/postgresql latest 3c3ecd94ae7e 3 days ago 265MB
For each Dockerfile I ran a build:
For solr
$ docker build . (for SOLR)
Step 7/9
ADD failed: stat /var/lib/docker/tmp/docker-builder234804113/solrconfig.xml: no such file or directory
For postgresql
$ docker build . (for POSTGRESQL)
Successfully built 8b296a6e3153
For ckan
$ docker build . -t ckan && docker run -d -p 80:5000 --link db:db --link redis:redis --link solr:solr ckan
Error in Step 7/26 :
Get:1 http://security.debian.org/ jessie/updates/main perl-base amd64 5.20.2-3+deb8u9 [1226 kB]
debconf: delaying package configuration, since apt-utils is not installed
And Fail in:
Step 12/26 : ADD ./requirements.txt $CKAN_HOME/src/ckan/requirements.txt
ADD failed: stat /var/lib/docker/tmp/docker-builder814560705/requirements.txt: no such file or directory
SECOND ATTEMPT
Removed images
Pull again images
$ docker pull ckan/solr
$ docker pull ckan/ckan
$ docker pull ckan/postgresql
And Launch:
$ docker build . -t ckan && docker run -d -p 80:5000 --link db:db --link redis:redis --link solr:solr ckan
for the Dockerfile in: https://hub.docker.com/r/ckan/ckan/~/dockerfile/
and I get the same error in the same step:
Step 12/26 : ADD ./requirements.txt $CKAN_HOME/src/ckan/requirements.txt
ADD failed: stat /var/lib/docker/tmp/docker-builder254648764/requirements.txt: no such file or directory
but the log in Docker Hub is correct.
Am I doing something wrong?
Dockerfile
FROM debian:jessie
MAINTAINER Open Knowledge
ENV CKAN_HOME /usr/lib/ckan/default
ENV CKAN_CONFIG /etc/ckan/default
ENV CKAN_STORAGE_PATH /var/lib/ckan
ENV CKAN_SITE_URL http://localhost:5000
# Install required packages
RUN apt-get -q -y update && apt-get -q -y upgrade && DEBIAN_FRONTEND=noninteractive apt-get -q -y install \
    python-dev \
    python-pip \
    python-virtualenv \
    libpq-dev \
    git-core \
    build-essential \
    libssl-dev \
    libffi-dev \
    && apt-get -q clean
# SetUp Virtual Environment CKAN
RUN mkdir -p $CKAN_HOME $CKAN_CONFIG $CKAN_STORAGE_PATH
RUN virtualenv $CKAN_HOME
RUN ln -s $CKAN_HOME/bin/pip /usr/local/bin/ckan-pip
RUN ln -s $CKAN_HOME/bin/paster /usr/local/bin/ckan-paster
# SetUp Requirements
ADD ./requirements.txt $CKAN_HOME/src/ckan/requirements.txt
RUN ckan-pip install --upgrade -r $CKAN_HOME/src/ckan/requirements.txt
# TMP-BUGFIX https://github.com/ckan/ckan/issues/3388
ADD ./dev-requirements.txt $CKAN_HOME/src/ckan/dev-requirements.txt
RUN ckan-pip install --upgrade -r $CKAN_HOME/src/ckan/dev-requirements.txt
# TMP-BUGFIX https://github.com/ckan/ckan/issues/3594
RUN ckan-pip install --upgrade urllib3
# SetUp CKAN
ADD . $CKAN_HOME/src/ckan/
RUN ckan-pip install -e $CKAN_HOME/src/ckan/
RUN ln -s $CKAN_HOME/src/ckan/ckan/config/who.ini $CKAN_CONFIG/who.ini
# SetUp EntryPoint
COPY ./contrib/docker/ckan-entrypoint.sh /
RUN chmod +x /ckan-entrypoint.sh
ENTRYPOINT ["/ckan-entrypoint.sh"]
# Volumes
VOLUME ["/etc/ckan/default"]
VOLUME ["/var/lib/ckan"]
EXPOSE 5000
CMD ["ckan-paster","serve","/etc/ckan/default/ckan.ini"]
UPDATE 1
As suggested, I added the content of the ckan package from:
http://packaging.ckan.org/
and copied it to my Dockerfile directory: /data/usr/lib/ckan/default/src/ckan/*
New error in step 15 (see comments)
UPDATE 2
Cloned ckan from: https://github.com/ckan/ckan
$ docker build . -t ckan && docker run -d -p 80:5000 --link db:db --link redis:redis --link solr:solr ckan
Result:
Step 26/26 : CMD ckan-paster serve /etc/ckan/default/ckan.ini
---> Running in eeeb6ace6ee1
---> 3cd87cf4a1af
Removing intermediate container eeeb6ace6ee1
Successfully built 3cd87cf4a1af
Successfully tagged ckan:latest
docker: Error response from daemon: could not get container for db: No such container: db.
Trying to access to ckan:
docker run -it ckan bash
I get this error:
Distribution already installed:
ckan 2.8.0a0 from /usr/lib/ckan/default/src/ckan
Creating /etc/ckan/default/ckan.ini
Now you should edit the config files
/etc/ckan/default/ckan.ini
Traceback (most recent call last):
File "/usr/local/bin/ckan-paster", line 11, in <module>
sys.exit(run())
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 102, in run
invoke(command, command_name, options, args[1:])
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 141, in invoke
exit_code = runner.run(args)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 236, in run
result = self.command()
File "/usr/lib/ckan/default/src/ckan/ckan/lib/cli.py", line 348, in command
self._load_config(cmd!='upgrade')
File "/usr/lib/ckan/default/src/ckan/ckan/lib/cli.py", line 321, in _load_config
self.site_user = load_config(self.options.config, load_site_user)
File "/usr/lib/ckan/default/src/ckan/ckan/lib/cli.py", line 230, in load_config
load_environment(conf.global_conf, conf.local_conf)
File "/usr/lib/ckan/default/src/ckan/ckan/config/environment.py", line 111, in load_environment
p.load_all()
File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 129, in load_all
unload_all()
File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 182, in unload_all
unload(*reversed(_PLUGINS))
File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 210, in unload
plugins_update()
File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 121, in plugins_update
environment.update_config()
File "/usr/lib/ckan/default/src/ckan/ckan/config/environment.py", line 289, in update_config
engine = sqlalchemy.engine_from_config(config, client_encoding='utf8')
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/__init__.py", line 428, in engine_from_config
return create_engine(url, **options)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/__init__.py", line 387, in create_engine
return strategy.create(*args, **kwargs)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 50, in create
u = url.make_url(name_or_url)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/url.py", line 194, in make_url
return _parse_rfc1738_args(name_or_url)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/url.py", line 240, in _parse_rfc1738_args
return URL(name, **components)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/url.py", line 60, in __init__
self.port = int(port)
ValueError: invalid literal for int() with base 10: ''
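This trailing ValueError comes from SQLAlchemy's URL parser calling int() on an empty port string, which suggests the generated /etc/ckan/default/ckan.ini contains a sqlalchemy.url whose host ends in a colon with no port digits (plausible when the linked db container was never wired up). The failure is easy to see in isolation (hypothetical example URL, parsed with the stdlib rather than SQLAlchemy):

```python
from urllib.parse import urlsplit

# A URL with a colon after the host but no port digits:
url = "postgresql://ckan:pass@db:/ckan"

netloc = urlsplit(url).netloc      # 'ckan:pass@db:'
port = netloc.rpartition(":")[2]   # '' (the empty port component)
try:
    int(port)                      # what SQLAlchemy does internally
except ValueError as exc:
    print(exc)  # invalid literal for int() with base 10: ''
```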
UPDATE 3 (Matt Fullerton's suggestion)
I have cloned https://github.com/parksandwildlife/ckan.git in a folder called ckan
I have cloned https://github.com/parksandwildlife/ckan/tree/3649-docker-upgrade in another folder called ckan-3649
In the ckan folder:
docker build . -t ckan && docker run -d -p 80:5000 --link db:db --link redis:redis --link solr:solr ckan
In Step 15/26 :
RUN ckan-pip install --upgrade -r $CKAN_HOME/src/ckan/dev-requirements.txt
ERROR:
Cleaning up...
Command /usr/lib/ckan/default/bin/python2 -c "import setuptools, tokenize;__file__='/tmp/pip-build-BLM6DJ/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-Vynm83-record/install-record.txt --single-version-externally-managed --compile --install-headers /usr/lib/ckan/default/include/site/python2.7 failed with error code 1 in /tmp/pip-build-BLM6DJ/cryptography
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c ckan-pip install --upgrade -r $CKAN_HOME/src/ckan/dev-requirements.txt' returned a non-zero code: 1
In ckan-3649/contrib/docker I ran docker-compose up:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b6d7a5a595ca redis:latest "docker-entrypoint..." 9 minutes ago Up 9 minutes 6379/tcp redis
4551ccea28f4 ckan/solr:latest "docker-entrypoint..." 9 minutes ago Up 9 minutes 8983/tcp solr
86cea84b6ab5 docker_db "docker-entrypoint..." 9 minutes ago Up 9 minutes 5432/tcp db
fca64f1bee5a clementmouchet/datapusher "python datapusher..." 9 minutes ago Up 9 minutes 0.0.0.0:8800->8800/tcp datapusher
But I think the ckan image is missing...
There is a major upgrade of the docker functionality in CKAN underway. I would suggest cloning CKAN from here:
https://github.com/parksandwildlife/ckan
git clone https://github.com/parksandwildlife/ckan.git
Then checkout the origin/3649-docker-upgrade branch
(https://github.com/parksandwildlife/ckan/tree/3649-docker-upgrade)
git checkout origin/3649-docker-upgrade
Then use docker-compose in the contrib/docker folder:
docker-compose up
This should assemble Solr, Postgres etc. for you.
Comments on mileage with this at https://github.com/ckan/ckan/pull/3692 will also be appreciated.