cannot connect to redis container from app container - docker

I followed the Django Channels tutorial to build a simple chat app:
https://channels.readthedocs.io/en/latest/tutorial/part_1.html
It works on my local machine with Redis, without Docker.
Now I want to put it into Docker containers with a docker-compose file, but the app cannot seem to connect to the Redis container. I have been trying and googling for two days, but none of the approaches I found work, so I want to ask for help here.
folder structure
my_project
- mysite (Django app)
- ... some folders and files
- docker-compose.yml
- Dockerfile
- requirements.txt
docker-compose.yml
version: '3'
services:
  app:
    build:
      # current directory
      context: .
    ports:
      # host to container
      - "8000:8000"
    volumes:
      # map the directory into the container, so changes in the current
      # directory take effect without restarting Docker
      - ./mysite:/mysiteRoot/mysite
    command: >
      sh -c "
      python3 manage.py makemigrations &&
      python3 manage.py migrate &&
      python3 manage.py runserver 0.0.0.0:8000"
    depends_on:
      - redis
  redis:
    image: redis:5.0.5-alpine
    ports:
      # host to container
      - "6379:6379"
Dockerfile
FROM python:3.7-alpine
MAINTAINER Aaron Wei.
ENV PYTHONUNBUFFERED 1
EXPOSE 8000
# Setup directory structure
RUN mkdir /mysiteRoot
WORKDIR /mysiteRoot/mysite/
# Install dependencies
COPY ./requirements.txt /mysiteRoot/requirements.txt
RUN apk add --update --no-cache postgresql-client
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc libc-dev linux-headers postgresql-dev
RUN apk add build-base python-dev py-pip jpeg-dev zlib-dev
ENV LIBRARY_PATH=/lib:/usr/lib
RUN pip install -r /mysiteRoot/requirements.txt
RUN apk del .tmp-build-deps
# Copy application
COPY ./mysite/ /mysiteRoot/mysite/
RUN adduser -D user
USER user
Django settings file
"""
Django settings for mysite project.
Generated by 'django-admin startproject' using Django 2.2.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 's0d^&2#s^126#6dsm7u4-t9pg03)if$dq##xxouht)#%#=o)r0'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ['0.0.0.0']
# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'channels',
    'chat',
]
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'mysite.urls'
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
WSGI_APPLICATION = 'mysite.wsgi.application'
ASGI_APPLICATION = 'mysite.routing.application'
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            "hosts": [('0.0.0.0', 6379)],
        },
    },
}
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_URL = '/static/'
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7966fe4962f7 tourmama_copy_app "sh -c '\n pyth…" 21 minutes ago Up 20 minutes 0.0.0.0:8000->8000/tcp tourmama_copy_app_1
61446d04c4b2 redis:5.0.5-alpine "docker-entrypoint.s…" 27 minutes ago Up 20 minutes 0.0.0.0:6379->6379/tcp tourmama_copy_redis_1
Then the error in console shows :
app_1 | HTTP GET /chat/dadas/ 200 [0.01, 172.19.0.1:39468]
app_1 | WebSocket HANDSHAKING /ws/chat/dadas/ [172.19.0.1:39470]
app_1 | Exception inside application: [Errno 111] Connect call failed ('0.0.0.0', 6379)
app_1 | File "/usr/local/lib/python3.7/site-packages/channels/sessions.py", line 179, in __call__
app_1 | return await self.inner(receive, self.send)
app_1 | File "/usr/local/lib/python3.7/site-packages/channels/middleware.py", line 41, in coroutine_call
app_1 | await inner_instance(receive, send)
app_1 | File "/usr/local/lib/python3.7/site-packages/channels/consumer.py", line 59, in __call__
app_1 | [receive, self.channel_receive], self.dispatch
app_1 | File "/usr/local/lib/python3.7/site-packages/channels/utils.py", line 59, in await_many_dispatch
app_1 | await task
app_1 | File "/usr/local/lib/python3.7/site-packages/channels_redis/core.py", line 425, in receive
app_1 | real_channel
app_1 | File "/usr/local/lib/python3.7/site-packages/channels_redis/core.py", line 477, in receive_single
app_1 | index, channel_key, timeout=self.brpop_timeout
app_1 | File "/usr/local/lib/python3.7/site-packages/channels_redis/core.py", line 324, in _brpop_with_clean
app_1 | async with self.connection(index) as connection:
app_1 | File "/usr/local/lib/python3.7/site-packages/channels_redis/core.py", line 813, in __aenter__
app_1 | self.conn = await self.pool.pop()
app_1 | File "/usr/local/lib/python3.7/site-packages/channels_redis/core.py", line 70, in pop
app_1 | conns.append(await aioredis.create_redis(**self.host, loop=loop))
app_1 | File "/usr/local/lib/python3.7/site-packages/aioredis/commands/__init__.py", line 178, in create_redis
app_1 | loop=loop)
app_1 | File "/usr/local/lib/python3.7/site-packages/aioredis/connection.py", line 108, in create_connection
app_1 | timeout, loop=loop)
app_1 | File "/usr/local/lib/python3.7/asyncio/tasks.py", line 388, in wait_for
app_1 | return await fut
app_1 | File "/usr/local/lib/python3.7/site-packages/aioredis/stream.py", line 19, in open_connection
app_1 | lambda: protocol, host, port, **kwds)
app_1 | File "/usr/local/lib/python3.7/asyncio/base_events.py", line 959, in create_connection
app_1 | raise exceptions[0]
app_1 | File "/usr/local/lib/python3.7/asyncio/base_events.py", line 946, in create_connection
app_1 | await self.sock_connect(sock, address)
app_1 | File "/usr/local/lib/python3.7/asyncio/selector_events.py", line 464, in sock_connect
app_1 | return await fut
app_1 | File "/usr/local/lib/python3.7/asyncio/selector_events.py", line 494, in _sock_connect_cb
app_1 | raise OSError(err, f'Connect call failed {address}')
app_1 | [Errno 111] Connect call failed ('0.0.0.0', 6379)
app_1 | WebSocket DISCONNECT /ws/chat/dadas/ [172.19.0.1:39470]
Does anyone have any idea how to connect to the Redis container from the app?
Thank you!

You should change:
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            "hosts": [('0.0.0.0', 6379)],
        },
    },
}
to
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            "hosts": [('redis', 6379)],
        },
    },
}
in your Django settings file.
When you set up containers with Compose, they are all connected to the default network created by Compose. In this case, redis is the DNS name of the Redis container and will be resolved to the container's IP automatically.
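If you would rather not hard-code the service name, a common variation is to read the Redis host and port from environment variables in settings.py and set them per environment. The following is a minimal sketch, assuming hypothetical REDIS_HOST / REDIS_PORT variables that you would add under the app service's environment: key yourself; they are not part of the original setup:
# settings.py (sketch): resolve the Redis host from the environment, falling back
# to the Compose service name "redis". REDIS_HOST / REDIS_PORT are hypothetical
# variables you would define yourself in docker-compose.yml under the app service.
import os

REDIS_HOST = os.environ.get("REDIS_HOST", "redis")
REDIS_PORT = int(os.environ.get("REDIS_PORT", "6379"))

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [(REDIS_HOST, REDIS_PORT)],
        },
    },
}
This way the same settings file works locally (export REDIS_HOST=127.0.0.1) and inside Compose, where the default "redis" resolves over the Compose network.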

Related

how to setup multiple apps on different subdomains using docker compose and 'SteveLTN/https-portal'?

I am using DigitalOcean's droplet to host the apps.
I have different apps that I want to run on different subdomains (sub1.example.com, sub2.example.com, etc.).
So far I managed to run just one of them on the main domain (eg: example.com)
The folder structure looks like this:
-/apps
  -/app1Folder
    docker-compose.yml
    Dockerfile
    -/codeFolder
  -/app2Folder
    docker-compose.yml
    Dockerfile
    -/codeFolder
Each Dockerfile looks like this (of course the app name and the 5001 port are different for each app):
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["App1.csproj", "."]
RUN dotnet restore "./App1.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "App1.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "App1.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV ASPNETCORE_URLS http://+:5001
EXPOSE 5001
ENTRYPOINT ["dotnet", "App1.dll"]
Each docker-compose.yml file has the following structure:
version: '1'
services:
  app:
    build:
      context: ./
      dockerfile: Dockerfile
  https-portal:
    image: steveltn/https-portal:1
    ports:
      - '80:80'
      - '443:443'
    links:
      - app
    restart: always
    environment:
      DOMAINS: 'sub1.example.com -> http://app:5001'
      # for app2 we route to 'sub2.example.com'
      # STAGE: 'production'
From what I understand so far, I should have only one docker-compose file that contains the configs for both apps I need to run.
I am not sure how to make the routing work or what I should change to make it work.
I am new to all this and don't know exactly where to start.
Right now, after running the first app (which works), I get the error below when trying to 'docker-compose up' the second one.
I think it has something to do with the fact that I am basically trying to run two https-portal containers that both expose ports 80 and 443 by default. Even if I change the ports on the second app (app2), it still gives the same error.
I am positive that the DNS is configured correctly and propagated to the host.
Any suggestion is welcome! Please 'halp'! :)
Signing certificates from https://acme-staging-v02.api.letsencrypt.org/directory ...
https-portal_1 | Parsing account key...
https-portal_1 | Parsing CSR...
https-portal_1 | Found domains: sub2.example.com
https-portal_1 | Getting directory...
https-portal_1 | Directory found!
https-portal_1 | Registering account...
https-portal_1 | Already registered!
https-portal_1 | Creating new order...
https-portal_1 | Order created!
https-portal_1 | Verifying sub2.example.com...
https-portal_1 | Traceback (most recent call last):
https-portal_1 | File "/bin/acme_tiny", line 198, in <module>
https-portal_1 | main(sys.argv[1:])
https-portal_1 | File "/bin/acme_tiny", line 194, in main
https-portal_1 | signed_crt = get_crt(args.account_key, args.csr, args.acme_dir, log=LOGGER, CA=args.ca, disable_check=args.disable_check, directory_url=args.directory_url, contact=args.contact)
https-portal_1 | File "/bin/acme_tiny", line 149, in get_crt
https-portal_1 | raise ValueError("Challenge did not pass for {0}: {1}".format(domain, authorization))
https-portal_1 | ValueError: Challenge did not pass for sub2.example.com: {u'status': u'invalid', u'challenges': [{u'status': u'invalid', u'validationRecord': [{u'url': u'http://sub2.example.com/.well-known/acme-challenge/neHJEUpiAjxhvk1nicoRnDaT_xOAIXMaG8MxJstPy14', u'hostname': u'sub2.example.com', u'addressUsed': u'143.198.249.45', u'port': u'80', u'addressesResolved': [u'IP_ADDRESS']}], u'url': u'https://acme-staging-v02.api.letsencrypt.org/acme/chall-v3/4917620723/B0GD9Q', u'token': u'neHJEUpiAjxhvk1nicoRnDaT_xOAIXMaG8MxJstPy14', u'error': {u'status': 400, u'type': u'urn:ietf:params:acme:error:connection', u'detail': u'143.198.249.45: Fetching http://sub2.example.com/.well-known/acme-challenge/neHJEUpiAjxhvk1nicoRnDaT_xOAIXMaG8MxJstPy14: Error getting validation data'}, u'validated': u'2023-01-11T19:31:34Z', u'type': u'http-01'}], u'identifier': {u'type': u'dns', u'value': u'sub2.example.com'}, u'expires': u'2023-01-18T19:31:33Z'}
https-portal_1 | ================================================================================
https-portal_1 | Failed to sign sub2.example.com.
https-portal_1 | Make sure you DNS is configured correctly and is propagated to this host
https-portal_1 | machine. Sometimes that takes a while.
https-portal_1 | ================================================================================
https-portal_1 | Failed to obtain certs for sub2.example.com
https-portal_1 | [cont-init.d] 20-setup: exited 1.
https-portal_1 | [cont-finish.d] executing container finish scripts...

Docker not properly installing python packages using pip install -r requirements.txt

I am pretty new to Docker and Django. I am trying to set up a Django project for a RESTful API running in a Docker container. I am trying to install the relevant Python packages with a RUN command in the Dockerfile; however, not all of the packages are installing successfully.
Here are the files I'm using and the error I am getting.
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: password
  web:
    build: .
    # command: bash -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
requirements.txt
djangorestframework
django-filter
markdown
Django
psycopg2
When I execute docker-compose up I get this output
Starting apiTest_db_1 ... done
Recreating apiTest_web_1 ... done
Attaching to apiTest_db_1, apiTest_web_1
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2020-04-17 21:35:57.022 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-04-17 21:35:57.023 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-04-17 21:35:57.023 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-04-17 21:35:57.028 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-04-17 21:35:57.075 UTC [27] LOG: database system was shut down at 2020-04-17 21:34:34 UTC
db_1 | 2020-04-17 21:35:57.100 UTC [1] LOG: database system is ready to accept connections
web_1 | Watching for file changes with StatReloader
web_1 | Exception in thread django-main-thread:
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.8/threading.py", line 932, in _bootstrap_inner
web_1 | self.run()
web_1 | File "/usr/local/lib/python3.8/threading.py", line 870, in run
web_1 | self._target(*self._args, **self._kwargs)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 53, in wrapper
web_1 | fn(*args, **kwargs)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
web_1 | autoreload.raise_last_exception()
web_1 | File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 76, in raise_last_exception
web_1 | raise _exception[1]
web_1 | File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 357, in execute
web_1 | autoreload.check_errors(django.setup)()
web_1 | File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 53, in wrapper
web_1 | fn(*args, **kwargs)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/__init__.py", line 24, in setup
web_1 | apps.populate(settings.INSTALLED_APPS)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/apps/registry.py", line 91, in populate
web_1 | app_config = AppConfig.create(entry)
web_1 | File "/usr/local/lib/python3.8/site-packages/django/apps/config.py", line 90, in create
web_1 | module = import_module(entry)
web_1 | File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
web_1 | return _bootstrap._gcd_import(name[level:], package, level)
web_1 | File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
web_1 | File "<frozen importlib._bootstrap>", line 991, in _find_and_load
web_1 | File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
web_1 | ModuleNotFoundError: No module named 'rest_framework'
Which indicates that djangorestframework has not been installed by pip.
Furthermore, when I swap the commented line in the docker-compose.yml file with the line below it (so that section becomes)
command: bash -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
# command: python manage.py runserver 0.0.0.0:8000
Then when I run docker-compose up I get the following output.
Creating network "apiTest_default" with the default driver
Creating apiTest_db_1 ... done
Creating apiTest_web_1 ... done
Attaching to apiTest_db_1, apiTest_web_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting default time zone ... Etc/UTC
db_1 | creating configuration files ... ok
db_1 | running bootstrap script ... ok
db_1 | performing post-bootstrap initialization ... ok
web_1 | Collecting djangorestframework
db_1 | syncing data to disk ... initdb: warning: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 | ok
db_1 |
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 | waiting for server to start....2020-04-17 22:47:22.783 UTC [46] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-04-17 22:47:22.789 UTC [46] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
web_1 | Downloading djangorestframework-3.11.0-py3-none-any.whl (911 kB)
db_1 | 2020-04-17 22:47:22.823 UTC [47] LOG: database system was shut down at 2020-04-17 22:47:22 UTC
db_1 | 2020-04-17 22:47:22.841 UTC [46] LOG: database system is ready to accept connections
db_1 | done
db_1 | server started
db_1 |
db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db_1 |
db_1 | 2020-04-17 22:47:22.885 UTC [46] LOG: received fast shutdown request
db_1 | waiting for server to shut down....2020-04-17 22:47:22.889 UTC [46] LOG: aborting any active transactions
db_1 | 2020-04-17 22:47:22.908 UTC [46] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1
db_1 | 2020-04-17 22:47:22.920 UTC [48] LOG: shutting down
db_1 | 2020-04-17 22:47:22.974 UTC [46] LOG: database system is shut down
db_1 | done
db_1 | server stopped
db_1 |
db_1 | PostgreSQL init process complete; ready for start up.
db_1 |
db_1 | 2020-04-17 22:47:23.021 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-04-17 22:47:23.022 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-04-17 22:47:23.023 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-04-17 22:47:23.036 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-04-17 22:47:23.063 UTC [55] LOG: database system was shut down at 2020-04-17 22:47:22 UTC
db_1 | 2020-04-17 22:47:23.073 UTC [1] LOG: database system is ready to accept connections
web_1 | Collecting django-filter
web_1 | Downloading django_filter-2.2.0-py3-none-any.whl (69 kB)
web_1 | Collecting markdown
web_1 | Downloading Markdown-3.2.1-py2.py3-none-any.whl (88 kB)
web_1 | Requirement already satisfied: Django in /usr/local/lib/python3.8/site-packages (from -r requirements.txt (line 4)) (3.0.5)
web_1 | Requirement already satisfied: psycopg2 in /usr/local/lib/python3.8/site-packages (from -r requirements.txt (line 5)) (2.8.5)
web_1 | Requirement already satisfied: setuptools>=36 in /usr/local/lib/python3.8/site-packages (from markdown->-r requirements.txt (line 3)) (46.1.3)
web_1 | Requirement already satisfied: pytz in /usr/local/lib/python3.8/site-packages (from Django->-r requirements.txt (line 4)) (2019.3)
web_1 | Requirement already satisfied: sqlparse>=0.2.2 in /usr/local/lib/python3.8/site-packages (from Django->-r requirements.txt (line 4)) (0.3.1)
web_1 | Requirement already satisfied: asgiref~=3.2 in /usr/local/lib/python3.8/site-packages (from Django->-r requirements.txt (line 4)) (3.2.7)
web_1 | Installing collected packages: djangorestframework, django-filter, markdown
web_1 | Successfully installed django-filter-2.2.0 djangorestframework-3.11.0 markdown-3.2.1
web_1 | Watching for file changes with StatReloader
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
web_1 |
web_1 | You have 17 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
web_1 | Run 'python manage.py migrate' to apply them.
web_1 | April 17, 2020 - 22:47:25
web_1 | Django version 3.0.5, using settings 'apiTesting.settings'
web_1 | Starting development server at http://0.0.0.0:8000/
web_1 | Quit the server with CONTROL-C.
This shows that some packages, such as Django, were successfully installed by the Dockerfile, but others, like djangorestframework, django-filter, and markdown, were not.
Why is this, and what can I do in my Dockerfile to make them install correctly?
Both the main problem and the problem mentioned in the comments of itamar-turner-trauring's answer were solved by running
docker-compose up --build
instead of docker-compose up.
I'm not 100% sure why this fixed it, but I'd guess Compose was loading the container from an old image which didn't include the new Python packages, so forcing it to rebuild made it pick them up.
You are doing two things that potentially conflict:
- Inside the image, as part of the build, you copy everything into /code.
- In the compose file, you mount the current working directory into /code.
I am not sure that's the problem, but I suggest removing the volumes section from the compose file and seeing if that helps.
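A quick way to see which requirements are actually importable in the container that is really running is a small check script executed inside the web service. This is a minimal sketch with a hypothetical file name (check_deps.py is not part of the original project); you could run it with docker-compose exec web python check_deps.py:
# check_deps.py (sketch, hypothetical helper - not part of the original project).
# Prints which of the packages from requirements.txt can actually be imported
# inside the running container.
import importlib

for name in ("rest_framework", "django_filters", "markdown", "django", "psycopg2"):
    try:
        importlib.import_module(name)
        print(f"{name}: OK")
    except ImportError as exc:
        print(f"{name}: MISSING ({exc})")
If packages show up as MISSING before a rebuild but are present after docker-compose up --build, that supports the stale-image explanation above.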

docker: vimagick/stunnel ==> /entrypoint.sh: line 21: openssl: not found

I am working on an Ubuntu 18.04.4 LTS VM, where I have docker and docker-compose installed.
I am using the vimagick/stunnel image to build a tunnel to a client for QuickFIX services.
Problem: on a new installation, when I bring up the docker-compose file, it throws the following error:
tunnel_primary_1 | chmod: stunnel.pem: No such file or directory
tunnel_primary_1 | [ ] Clients allowed=512000
tunnel_primary_1 | [.] stunnel 5.56 on x86_64-alpine-linux-musl platform
tunnel_primary_1 | [.] Compiled/running with OpenSSL 1.1.1d 10 Sep 2019
tunnel_primary_1 | [.] Threading:PTHREAD Sockets:POLL,IPv6 TLS:ENGINE,OCSP,PSK,SNI
tunnel_primary_1 | [ ] errno: (*__errno_location())
tunnel_primary_1 | [.] Reading configuration from file /etc/stunnel/stunnel.conf
tunnel_primary_1 | [.] UTF-8 byte order mark not detected
tunnel_primary_1 | [ ] No PRNG seeding was required
tunnel_primary_1 | [ ] Initializing service [quickfix]
tunnel_primary_1 | [ ] Ciphers: HIGH:!aNULL:!SSLv2:!DH:!kDHEPSK
tunnel_primary_1 | [ ] TLSv1.3 ciphersuites: TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256
tunnel_primary_1 | [ ] TLS options: 0x02100004 (+0x00000000, -0x00000000)
tunnel_primary_1 | [ ] Loading certificate from file: /etc/stunnel/stunnel.pem
tunnel_primary_1 | [!] error queue: ssl/ssl_rsa.c:615: error:140DC002:SSL routines:use_certificate_chain_file:system lib
tunnel_primary_1 | [!] error queue: crypto/bio/bss_file.c:290: error:20074002:BIO routines:file_ctrl:system lib
tunnel_primary_1 | [!] SSL_CTX_use_certificate_chain_file: crypto/bio/bss_file.c:288: error:02001002:system library:fopen:No such file or directory
tunnel_primary_1 | [!] Service [quickfix]: Failed to initialize TLS context
tunnel_primary_1 | [ ] Deallocating section defaults
prueba1_tunnel_primary_1 exited with code 1
This is my docker-compose.yml:
version: '3'
services:
  tunnel_primary:
    image: vimagick/stunnel
    ports:
      - "6789:6789"
    environment:
      - CLIENT=yes
      - SERVICE=quickfix
      - ACCEPT=0.0.0.0:6789
      - CONNECT=11.11.11.11:1234
    logging:
      driver: "json-file"
      options:
        max-size: "1024k"
        max-file: "10"
On the VM that is in production it works, and there is no difference in the installation. The only difference is that the vimagick/stunnel Docker image I use in production is seven months old.
Thanks!!!!!
This Docker image has been broken since they switched to LibreSSL (without updating their launch script, which still uses openssl).
There is a pull request fixing this issue that will (hopefully) be merged.
In the meantime, you can fork the repo containing the Dockerfile and modify dockerfiles/stunnel/docker-entrypoint.sh, replacing openssl with libressl.
I ended up publishing a new image to Docker Hub; use prokofyevdmitry/stunnel instead of vimagick/stunnel in your docker-compose.yml file.

Dockerized Phoenix/Elixir App Rejects All HTTP/socket requests

I'm trying to follow along with this tutorial to get my (working on localhost) Elixir/Phoenix app running in a Docker container, and I'm running into difficulties.
https://pspdfkit.com/blog/2018/how-to-run-your-phoenix-application-with-docker/
Here is my error:
[info] JOIN "room:lobby" to AlbatrossWeb.RoomChannel
phoenix_1 | Transport: Phoenix.Transports.WebSocket (2.0.0)
phoenix_1 | Serializer: Phoenix.Transports.V2.WebSocketSerializer
phoenix_1 | Parameters: %{}
phoenix_1 | inside room:lobby channel handler
phoenix_1 | [info] Replied room:lobby :ok
phoenix_1 | [error] Ranch protocol #PID<0.403.0> of listener AlbatrossWeb.Endpoint.HTTP (cowboy_protocol) terminated
phoenix_1 | ** (exit) exited in: Phoenix.Endpoint.CowboyWebSocket.resume()
phoenix_1 | ** (EXIT) an exception was raised:
phoenix_1 | ** (Protocol.UndefinedError) got FunctionClauseError with message "no function clause matching in Poison.Encoder.__protocol__/1" while retrieving Exception.message/1 for %Protocol.UndefinedError{description: "", protocol: Poison.Encoder, value: ["127", "127", "room:lobby", "phx_reply", %{response: %{}, status: :ok}]}
phoenix_1 | (poison) lib/poison/encoder.ex:66: Poison.Encoder.impl_for!/1
phoenix_1 | (poison) lib/poison/encoder.ex:69: Poison.Encoder.encode/2
phoenix_1 | (poison) lib/poison.ex:41: Poison.encode!/2
phoenix_1 | (phoenix) lib/phoenix/transports/v2/websocket_serializer.ex:22: Phoenix.Transports.V2.WebSocketSerializer.encode!/1
phoenix_1 | (phoenix) lib/phoenix/transports/websocket.ex:197: Phoenix.Transports.WebSocket.encode_reply/2
phoenix_1 | (phoenix) lib/phoenix/endpoint/cowboy_websocket.ex:77: Phoenix.Endpoint.CowboyWebSocket.websocket_handle/3
phoenix_1 | (cowboy) /app/deps/cowboy/src/cowboy_websocket.erl:588: :cowboy_websocket.handler_call/7
phoenix_1 | (phoenix) lib/phoenix/endpoint/cowboy_websocket.ex:49: Phoenix.Endpoint.CowboyWebSocket.resume/3
phoenix_1 | (cowboy) /app/deps/cowboy/src/cowboy_protocol.erl:442: :cowboy_protocol.execute/4
phoenix_1 | [info] JOIN "room:lobby" to AlbatrossWeb.RoomChannel
<....repeat forever....>
I'm not sure what is going on.
My room lobby is simply a socket channel, defined in room_channel.ex as:
###room_channel.ex###
defmodule AlbatrossWeb.RoomChannel do
  use Phoenix.Channel

  def join("room:lobby", _message, socket) do
    IO.puts "inside room:lobby channel handler"
    {:ok, socket}
  end

  def join("room:" <> _private_room_id, _params, _socket) do
    {:error, %{reason: "unauthorized"}}
  end

  def handle_in("updated_comments", %{"payload" => payload}, socket) do
    IO.puts("inside updated_comments handle_in")
    broadcast! socket, "updated_comments", payload
    # ArticleController.retrieve(socket)
    {:noreply, socket}
  end
end
###room_channel.ex###
It runs fine when I run this without my docker files - what I added is the following:
###run.sh###
docker-compose up --build
###run.sh###
###Dockerfile###
FROM elixir:latest
RUN apt-get update && \
apt-get install -y postgresql-client
# Create app directory and copy the Elixir projects into it
RUN mkdir /app
COPY . /app
WORKDIR /app
# Install hex package manager
RUN mix local.hex --force
# Compile the project
RUN mix do compile
CMD ["/app/entrypoint.sh"]
###Dockerfile###
###docker-compose###
# Version of docker-compose
version: '3'

# Containers we are going to run
services:
  # Our Phoenix container
  phoenix:
    # The build parameters for this container.
    build:
      # Here we define that it should build from the current directory
      context: .
    environment:
      # Variables to connect to our Postgres server
      PGUSER: postgres
      PGPASSWORD: postgres
      PGDATABASE: db
      PGPORT: 5432
      # Hostname of our Postgres container
      PGHOST: db
    ports:
      # Mapping the port to make the Phoenix app accessible outside of the container
      - "4000:4000"
    depends_on:
      # The db container needs to be started before we start this container
      - db
  db:
    # We use the predefined Postgres image
    image: postgres:9.6
    environment:
      # Set user/password for Postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      # Set a path where Postgres should store the data
      PGDATA: /var/lib/postgresql/data/pgdata
    restart: always
    volumes:
      - pgdata:/var/lib/postgresql/data

# Define the volumes
volumes:
  pgdata:
###docker-compose###
###entrypoint.sh###
#!/bin/bash
while ! pg_isready -q -h $PGHOST -p $PGPORT -U $PGUSER
do
  echo "$(date) - waiting for database to start"
  sleep 2
done

# Create, migrate, and seed database if it doesn't exist.
if [[ -z `psql -Atqc "\\list $PGDATABASE"` ]]; then
  echo "Database $PGDATABASE does not exist. Creating..."
  createdb -E UTF8 $PGDATABASE -l en_US.UTF-8 -T template0
  echo "1"
  mix do ecto.drop, ecto.create
  echo "2"
  mix phx.gen.schema Binarys binary postnum:integer leftchild:integer rightchild:integer downvotes:integer message:string parent:string upvotes:integer
  echo "3"
  mix phx.gen.schema Comments comment postnum:integer children:map downvotes:integer message:string parent:string upvotes:integer identifier:uuid
  echo "4"
  mix ecto.migrate
  echo "5"
  mix run priv/repo/seeds.exs
  echo "Database $PGDATABASE created."
fi

exec mix phx.server
###entrypoint.sh###
I also changed the config in my dev.exs file like this:
###dev.exs###
config :albatross, Albatross.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "postgres",
  password: "postgres",
  hostname: "db",
  database: "db",
  # port: 5432,
  pool_size: 10
###dev.exs###
Interestingly, all of these errors appear when my frontend is up but not making requests (other than connecting to the socket). If I try to make an HTTP request I get this:
phoenix_1 | [info] POST /addComment
phoenix_1 | inside addComment
phoenix_1 | [debug] Processing with AlbatrossWeb.PageController.addComment/2
phoenix_1 | Parameters: %{"payload" => %{"message" => "sf", "parent" => "no_parent", "postnum" => 6, "requestType" => "post", "urlKEY" => "addComment"}}
phoenix_1 | Pipelines: [:browser]
phoenix_1 | [error] Failure while translating Erlang's logger event
phoenix_1 | ** (Protocol.UndefinedError) got FunctionClauseError with message "no function clause matching in Plug.Exception.__protocol__/1" while retrieving Exception.message/1 for %Protocol.UndefinedError{description: "", protocol: Plug.Exception, value: %Protocol.UndefinedError{description: "", protocol: Plug.Exception, value: %Protocol.UndefinedError{description: "", protocol: Plug.Exception, value: %Protocol.UndefinedError{description: "", protocol: String.Chars, value: %Postgrex.Query{columns: nil, name: "", param_formats: nil, param_oids: nil, param_types: nil, ref: nil, result_formats: nil, result_oids: nil, result_types: nil, statement: ["INSERT INTO ", [34, "comment", 34], [], [32, 40, [[[[[[[[[[], [34, "children", 34], 44], [34, "downvotes", 34], 44], [34, "identifier", 34], 44], [34, "message", 34], 44], [34, "parent", 34], 44], [34, "postnum", 34], 44], [34, "upvotes", 34], 44], [34, "inserted_at", 34], 44], 34, "updated_at", 34], ") VALUES ", [], 40, [[[[[[[[[[], [36 | "1"], 44], [36 | "2"], 44], [36 | "3"], 44], [36 | "4"], 44], [36 | "5"], 44], [36 | "6"], 44], [36 | "7"], 44], [36 | "8"], 44], 36 | "9"], 41], [], " RETURNING ", [], 34, "id", 34], types: nil}}}}}
phoenix_1 | (plug) lib/plug/exceptions.ex:4: Plug.Exception.impl_for!/1
phoenix_1 | (plug) lib/plug/exceptions.ex:19: Plug.Exception.status/1
phoenix_1 | (plug) lib/plug/adapters/translator.ex:79: Plug.Adapters.Translator.non_500_exception?/1
phoenix_1 | (plug) lib/plug/adapters/translator.ex:49: Plug.Adapters.Translator.translate_ranch/5
phoenix_1 | (logger) lib/logger/erlang_handler.ex:104: Logger.ErlangHandler.translate/6
phoenix_1 | (logger) lib/logger/erlang_handler.ex:97: Logger.ErlangHandler.translate/5
phoenix_1 | (logger) lib/logger/erlang_handler.ex:30: anonymous fn/3 in Logger.ErlangHandler.log/2
phoenix_1 | (logger) lib/logger.ex:861: Logger.normalize_message/2
phoenix_1 | (logger) lib/logger.ex:684: Logger.__do_log__/3
phoenix_1 | (kernel) logger_backend.erl:51: :logger_backend.call_handlers/3
phoenix_1 | (kernel) logger_backend.erl:38: :logger_backend.log_allowed/2
phoenix_1 | (ranch) /app/deps/ranch/src/ranch_conns_sup.erl:167: :ranch_conns_sup.loop/4
phoenix_1 | (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
phoenix_1 |
So you can see that it receives the request and appears to process it; it just can't return a response. I have my ports exposed in both my Docker and docker-compose files, and I really can't see what else could be going wrong, as the app works when I run it outside the Docker containers.
What is going wrong?
I think the problem lies in your Dockerfile.
You didn't expose any port.
To be able to publish a port, you need to expose it first.
Try adding EXPOSE 4000 to your Dockerfile.
Not enough reputation to reply to the other answer, but I wanted to inform potential readers that the EXPOSE instruction is nothing but documentation. It is not necessary to expose a port before publishing it.
From the official docker documentation:
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
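Since EXPOSE alone does not make a port reachable from the host, a quick way to check whether the published mapping ("4000:4000" in the compose file above) is actually in effect is a small connectivity probe run on the host machine. This is a minimal sketch, not part of the original question; adjust the host and port for your setup:
# probe_port.py (sketch): check from the host whether the container's published
# port is accepting TCP connections. Assumes the Phoenix app should be reachable
# on localhost:4000, as mapped in the docker-compose file above.
import socket
import sys

HOST, PORT = "127.0.0.1", 4000

try:
    with socket.create_connection((HOST, PORT), timeout=3):
        print(f"{HOST}:{PORT} is accepting connections (the port mapping works)")
except OSError as exc:
    print(f"cannot connect to {HOST}:{PORT}: {exc}")
    print("check the 'ports:' mapping in docker-compose.yml or the -p flag on docker run")
    sys.exit(1)
If the probe connects but requests still fail, the problem is inside the application (as with the Poison serializer error in the question above) rather than in the Docker networking.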
Sorry to add this as an answer, but I am facing some problems while trying to create a Docker container for an Elixir/Phoenix app. I am only creating a REST API (no HTML) type of project, and it works perfectly locally. When I create the Docker image and container and run the container, the app runs fine, but when I try from Postman I always get an error saying "socket hang up", which is not very clear. Please check the errors from Postman below.
picture socket hang up error 1
picture socket hang up error 2
This is my Dockerfile:
FROM elixir:alpine
RUN mkdir /app
COPY . /app
WORKDIR /app
RUN apk update && apk add inotify-tools
RUN mix local.hex --force && mix local.rebar --force
RUN mix do deps.get, deps.compile
EXPOSE 4000
CMD ["mix", "phx.server"]
Note that I am trying to keep things as simple as I can, since I am still learning Elixir and Phoenix and want to keep things clear for myself, so I don't have any networking or security setup with docker-compose or any other integration. I just want to run my REST API made in Elixir/Phoenix inside a Docker container.

Unable to access CKAN portal using DOCKER

I'm trying to run CKAN in a Docker container following the steps in: http://docs.ckan.org/en/ckan-2.4.7/maintaining/installing/install-using-docker.html
The images available from https://hub.docker.com/u/ckan/ seem to have been updated at the time of posting this question (two days ago).
Well, I have followed the steps:
$ docker run -d --name db ckan/postgresql
$ docker run -d --name solr ckan/solr
$ docker run -d -p 80:80 --link db:db --link solr:solr ckan/ckan
And everything is OK, but the question is: how can I access the CKAN portal?
Using docker inspect <ckan_image> I get something like this:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "c66a4d1bb1a27c160f1655a9c660d24337e85053e8a8ad1e1a2c570ed217223e",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"5000/tcp": null,
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "80"
}
]
},
"SandboxKey": "/var/run/docker/netns/c66a4d1bb1a2",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "7985fc49cc7795b668ca4dfc5812f0ffa40f305f29a7726b15947890051f2014",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.4",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:04",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "88de6de00bdbc9974e48021ff783378835fc99d09582b8f7ccaab363a605a499",
"EndpointID": "7985fc49cc7795b668ca4dfc5812f0ffa40f305f29a7726b15947890051f2014",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.4",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:04",
"DriverOpts": null
}
}
}
The exposed ports are 5000 and 80, so using this IP address and these ports it should be accessible.
Am I missing something?
Accessing Solr works correctly at: http://localhost:8983/solr/#/
UPDATE 1
Following the help of Tarun Lalwani and using docker-compose, I think there is a problem between CKAN and Solr. This is the error output; see the second line:
Invalid URL u'http://:/solr/ckan/select/?q=%2A%3A%2A&rows=1&wt=json': No host supplied
ckan_1 | 2017-07-31 11:23:37,622 INFO [ckan.config.environment] Loading static files from public
****ckan_1 | 2017-07-31 11:23:37,916 ERROR [ckan.lib.search.common] Invalid URL u'http://:/solr/ckan/select/?q=%2A%3A%2A&rows=1&wt=json': No host supplied****
ckan_1 | Traceback (most recent call last):
ckan_1 | File "/usr/lib/ckan/default/src/ckan/ckan/lib/search/common.py", line 57, in is_available
ckan_1 | conn.search(q="*:*", rows=1)
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/pysolr.py", line 720, in search
ckan_1 | response = self._select(params, handler=search_handler)
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/pysolr.py", line 418, in _select
ckan_1 | return self._send_request('get', path)
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/pysolr.py", line 366, in _send_request
ckan_1 | timeout=self.timeout)
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/requests/sessions.py", line 515, in get
ckan_1 | return self.request('GET', url, **kwargs)
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/requests/sessions.py", line 488, in request
ckan_1 | prep = self.prepare_request(req)
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/requests/sessions.py", line 431, in prepare_request
ckan_1 | hooks=merge_hooks(request.hooks, self.hooks),
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/requests/models.py", line 305, in prepare
ckan_1 | self.prepare_url(url, params)
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/requests/models.py", line 382, in prepare_url
ckan_1 | raise InvalidURL("Invalid URL %r: No host supplied" % url)
ckan_1 | InvalidURL: Invalid URL u'http://:/solr/ckan/select/?q=%2A%3A%2A&rows=1&wt=json': No host supplied
ckan_1 | 2017-07-31 11:23:38,106 WARNI [ckan.lib.search] Problems were found while connecting to the SOLR server
ckan_1 | 2017-07-31 11:23:38,183 INFO [ckan.config.environment] Loading templates from /usr/lib/ckan/default/src/ckan/ckan/templates
ckan_1 | Traceback (most recent call last):
ckan_1 | File "/usr/local/bin/ckan-paster", line 11, in <module>
ckan_1 | sys.exit(run())
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 102, in run
ckan_1 | invoke(command, command_name, options, args[1:])
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 141, in invoke
ckan_1 | exit_code = runner.run(args)
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 236, in run
ckan_1 | result = self.command()
ckan_1 | File "/usr/lib/ckan/default/src/ckan/ckan/lib/cli.py", line 337, in command
ckan_1 | self._load_config(cmd!='upgrade')
ckan_1 | File "/usr/lib/ckan/default/src/ckan/ckan/lib/cli.py", line 310, in _load_config
ckan_1 | self.site_user = load_config(self.options.config, load_site_user)
ckan_1 | File "/usr/lib/ckan/default/src/ckan/ckan/lib/cli.py", line 225, in load_config
ckan_1 | load_environment(conf.global_conf, conf.local_conf)
ckan_1 | File "/usr/lib/ckan/default/src/ckan/ckan/config/environment.py", line 111, in load_environment
ckan_1 | p.load_all()
ckan_1 | File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 129, in load_all
ckan_1 | unload_all()
ckan_1 | File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 182, in unload_all
ckan_1 | unload(*reversed(_PLUGINS))
ckan_1 | File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 210, in unload
ckan_1 | plugins_update()
ckan_1 | File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 121, in plugins_update
ckan_1 | environment.update_config()
ckan_1 | File "/usr/lib/ckan/default/src/ckan/ckan/config/environment.py", line 289, in update_config
ckan_1 | engine = sqlalchemy.engine_from_config(config, client_encoding='utf8')
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/__init__.py", line 428, in engine_from_config
ckan_1 | return create_engine(url, **options)
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/__init__.py", line 387, in create_engine
ckan_1 | return strategy.create(*args, **kwargs)
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 50, in create
ckan_1 | u = url.make_url(name_or_url)
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/url.py", line 194, in make_url
ckan_1 | return _parse_rfc1738_args(name_or_url)
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/url.py", line 240, in _parse_rfc1738_args
ckan_1 | return URL(name, **components)
ckan_1 | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/url.py", line 60, in __init__
ckan_1 | self.port = int(port)
ckan_1 | ValueError: invalid literal for int() with base 10: ''
After this I get:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d44df7bad12f ckan/solr "docker-entrypoint..." 2 hours ago Up 19 minutes 8983/tcp dockercompose_solr_1
8f0c6c815746 ckan/postgresql "docker-entrypoint..." 2 hours ago Up 19 minutes 5432/tcp dockercompose_db_1
The rest seems to be ok.
UPDATE 2
I updated my docker-compose file. I have made several tests, and finally this combination seems to work. Solr works OK (I just changed the public port with Kitematic), PostgreSQL has the database and tables and I can access it correctly, but I get an Internal Server Error, so I think I'm close.
My docker-compose file:
version: '3'
services:
  solr:
    container_name: solr
    # Possible options for solr
    #image: milafrerichs/ckan_solr
    #image: miguelbgouveia/solr-docker
    image: ckan/solr:dev-v2.6
    ports:
      - "8983:8983/tcp"
  db:
    container_name: db
    image: ckan/postgresql
    ports:
      - "5432:5432/tcp"
  ckan:
    container_name: ckan
    image: ckan/ckan:dev-v2.6
    depends_on:
      - solr
      - db
    links:
      - db:db
    ports:
      - "5000:5000"
      - "80:80"
    environment:
      DATABASE_URL: "postgresql://ckan:ckan#db:5432/ckan"
      SOLR_URL: "http://solr:8983/solr/ckan"
Looking at the apache2 logs, I don't see anything interesting.
This is the complete output:
$ docker-compose -f docker-compose-ckan.yml up
Attaching to db, solr, ckan
solr | Starting Solr on port 8983 from /opt/solr/server
solr |
solr | 0 INFO (main) [ ] o.e.j.u.log Logging initialized #2757ms
solr | 1711 INFO (main) [ ] o.e.j.s.Server jetty-9.3.8.v20160314
db | running bootstrap script ... ok
db | performing post-bootstrap initialization ... ok
db | syncing data to disk ... ok
db |
ckan | Distribution already installed:
ckan | ckan 2.6.3 from /usr/lib/ckan/default/src/ckan
ckan | Creating /etc/ckan/default/ckan.ini
ckan | Now you should edit the config files
ckan | /etc/ckan/default/ckan.ini
ckan | Edited option sqlalchemy.url = "postgresql://ckan_default:pass#localhost/ckan_default"->"postgresql://ckan:ckan#db:5432/ckan" (section "app:main")
ckan | Edited option ckan.site_url = ""->"http://192.168.0.6" (section "app:main")
ckan | Option uncommented and set solr_url = "http://solr:8983/solr/ckan" (section "app:main")
ckan | Option uncommented and set ckan.storage_path = "/var/lib/ckan" (section "app:main")
ckan | Option uncommented and set email_to = "disabled#example.com" (section "app:main")
ckan | Option uncommented and set error_email_from = "ckan#95e87010bd4d" (section "app:main")
solr | 1803 INFO (main) [ ] o.e.j.d.p.ScanningAppProvider Deployment monitor [file:///opt/solr/server/contexts/] at interval 0
solr | 4046 INFO (main) [ ] o.e.j.w.StandardDescriptorProcessor NO JSP Support for /solr, did not find org.apache.jasper.servlet.JspServlet
solr | 4080 WARN (main) [ ] o.e.j.s.SecurityHandler ServletContext#o.e.j.w.WebAppContext#13a5fe33{/solr,file:///opt/solr/server/solr-webapp/webapp/,STARTING}{/opt/solr/server/solr-webapp/webapp} has uncovered http methods for path: /
solr | 4118 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): WebAppClassLoader=1740189450#67b92f0a
solr | 4163 INFO (main) [ ] o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
solr | 4169 INFO (main) [ ] o.a.s.c.SolrResourceLoader using system property solr.solr.home: /opt/solr/server/solr
solr | 4174 INFO (main) [ ] o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: '/opt/solr/server/solr'
solr | 4179 INFO (main) [ ] o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
solr | 4179 INFO (main) [ ] o.a.s.c.SolrResourceLoader using system property solr.solr.home: /opt/solr/server/solr
db | LOG: autovacuum launcher started
solr | 4186 INFO (main) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /opt/solr/server/solr/solr.xml
solr | 4455 INFO (main) [ ] o.a.s.c.CorePropertiesLocator Config-defined core root directory: /opt/solr/server/solr
db | done
db | server started
db | done
db | server stopped
db |
db | PostgreSQL init process complete; ready for start up.
db |
db | LOG: database system was shut down at 2017-08-01 22:58:52 UTC
db | LOG: MultiXact member wraparound protections are now enabled
db | LOG: database system is ready to accept connections
solr | 5404 INFO (main) [ ] o.a.s.h.c.HttpShardHandlerFactory created with socketTimeout : 600000,connTimeout : 60000,maxConnectionsPerHost : 20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : false,useRetries : false,
solr | 6145 INFO (main) [ ] o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with params: socketTimeout=600000&connTimeout=60000&retry=true
solr | 6153 INFO (main) [ ] o.a.s.l.LogWatcher SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
solr | 6160 INFO (main) [ ] o.a.s.l.LogWatcher Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
solr | 6163 INFO (main) [ ] o.a.s.c.CoreContainer Security conf doesn't exist. Skipping setup for authorization module.
solr | 6165 INFO (main) [ ] o.a.s.c.CoreContainer No authentication plugin used.
solr | 6341 INFO (main) [ ] o.a.s.c.CorePropertiesLocator Looking for core definitions underneath /opt/solr/server/solr
solr | 6353 INFO (main) [ ] o.a.s.c.CoreDescriptor Created CoreDescriptor: {name=ckan, config=solrconfig.xml, loadOnStartup=true, schema=schema.xml, configSetProperties=configsetprops.json, transient=false, dataDir=data/}
solr | 6356 INFO (main) [ ] o.a.s.c.CorePropertiesLocator Found core ckan in /opt/solr/server/solr/ckan
ckan | *** Running /etc/my_init.d/70_initdb...
**ckan | 2017-08-01 22:58:54,440 ERROR [pysolr] Failed to connect to server at 'http://solr:8983/solr/ckan/select/?q=%2A%3A%2A&rows=1&wt=json', are you sure that URL is correct? Checking it in a browser might help: HTTPConnectionPool(host='solr', port=8983): Max retries exceeded with url: /solr/ckan/select/?q=%2A%3A%2A&rows=1&wt=json (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fb54cbe9050>: Failed to establish a new connection: [Errno 111] Connection refused',))**
ckan | Traceback (most recent call last):
ckan | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/pysolr.py", line 361, in _send_request
ckan | timeout=self.timeout)
ckan | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/requests/sessions.py", line 487, in get
ckan | return self.request('GET', url, **kwargs)
ckan | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/requests/sessions.py", line 475, in request
ckan | resp = self.send(prep, **send_kwargs)
ckan | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/requests/sessions.py", line 585, in send
ckan | r = adapter.send(request, **kwargs)
ckan | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/requests/adapters.py", line 467, in send
ckan | raise ConnectionError(e, request=request)
**ckan | ConnectionError: HTTPConnectionPool(host='solr', port=8983): Max retries exceeded with url: /solr/ckan/select/?q=%2A%3A%2A&rows=1&wt=json (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fb54cbe9050>: Failed to establish a new connection: [Errno 111] Connection refused',))**
solr | 6399 INFO (main) [ ] o.a.s.c.CorePropertiesLocator Found 1 core definitions
solr | 6665 INFO (main) [ ] o.a.s.s.SolrDispatchFilter user.dir=/opt/solr/server
ckan | 2017-08-01 22:58:54,444 ERROR [ckan.lib.search.common] Failed to connect to server at 'http://solr:8983/solr/ckan/select/?q=%2A%3A%2A&rows=1&wt=json', are you sure that URL is correct? Checking it in a browser might help: HTTPConnectionPool(host='solr', port=8983): Max retries exceeded with url: /solr/ckan/select/?q=%2A%3A%2A&rows=1&wt=json (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fb54cbe9050>: Failed to establish a new connection: [Errno 111] Connection refused',))
ckan | Traceback (most recent call last):
ckan | File "/usr/lib/ckan/default/src/ckan/ckan/lib/search/common.py", line 56, in is_available
ckan | conn.search(q="*:*", rows=1)
ckan | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/pysolr.py", line 710, in search
ckan | response = self._select(params, handler=search_handler)
ckan | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/pysolr.py", line 411, in _select
ckan | return self._send_request('get', path)
ckan | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/pysolr.py", line 370, in _send_request
ckan | raise SolrError(error_message % params)
ckan | SolrError: Failed to connect to server at 'http://solr:8983/solr/ckan/select/?q=%2A%3A%2A&rows=1&wt=json', are you sure that URL is correct? Checking it in a browser might help: HTTPConnectionPool(host='solr', port=8983): Max retries exceeded with url: /solr/ckan/select/?q=%2A%3A%2A&rows=1&wt=json (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fb54cbe9050>: Failed to establish a new connection: [Errno 111] Connection refused',))
ckan | 2017-08-01 22:58:54,444 WARNI [ckan.lib.search] Problems were found while connecting to the SOLR server
ckan | 2017-08-01 22:58:55,458 ERROR [pysolr] Solr responded with an error (HTTP 503): [Reason: Error 503 {metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg=SolrCore is loading,code=503}]
ckan | 2017-08-01 22:58:55,458 ERROR [ckan.lib.search.common] Solr responded with an error (HTTP 503): [Reason: Error 503 {metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg=SolrCore is loading,code=503}]
ckan | Traceback (most recent call last):
ckan | File "/usr/lib/ckan/default/src/ckan/ckan/lib/search/common.py", line 56, in is_available
ckan | conn.search(q="*:*", rows=1)
ckan | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/pysolr.py", line 710, in search
ckan | response = self._select(params, handler=search_handler)
ckan | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/pysolr.py", line 411, in _select
ckan | return self._send_request('get', path)
ckan | File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/pysolr.py", line 386, in _send_request
ckan | raise SolrError(error_message % (resp.status_code, solr_message))
ckan | SolrError: Solr responded with an error (HTTP 503): [Reason: Error 503 {metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg=SolrCore is loading,code=503}]
db | ERROR: relation "user" does not exist at character 465
db | STATEMENT: SELECT "user".password AS user_password, "user".id AS user_id, "user".name AS user_name, "user".openid AS user_openid, "user".fullname AS user_fullname, "user".email AS user_email, "user".apikey AS user_apikey, "user".created AS user_created, "user".reset_key AS user_reset_key, "user".about AS user_about, "user".activity_streams_email_notifications AS user_activity_streams_email_notifications, "user".sysadmin AS user_sysadmin, "user".state AS user_state
db | FROM "user"
db | WHERE "user".name = 'default' OR "user".id = 'default' ORDER BY "user".name
db | LIMIT 1
solr | 13480 INFO (coreLoadExecutor-6-thread-1) [ x:ckan] o.a.s.r.RestManager Initializing 0 registered ManagedResources
db | ERROR: relation "user" does not exist at character 465
db | STATEMENT: SELECT "user".password AS user_password, "user".id AS user_id, "user".name AS user_name, "user".openid AS user_openid, "user".fullname AS user_fullname, "user".email AS user_email, "user".apikey AS user_apikey, "user".created AS user_created, "user".reset_key AS user_reset_key, "user".about AS user_about, "user".activity_streams_email_notifications AS user_activity_streams_email_notifications, "user".sysadmin AS user_sysadmin, "user".state AS user_state
db | FROM "user"
db | WHERE "user".name = 'default' OR "user".id = 'default' ORDER BY "user".name
db | LIMIT 1
solr | 13577 INFO (coreLoadExecutor-6-thread-1) [ x:ckan] o.a.s.h.c.SpellCheckComponent Initializing spell checkers
solr | 13640 INFO (coreLoadExecutor-6-thread-1) [ x:ckan] o.a.s.s.DirectSolrSpellChecker init: {name=default,field=_text_,classname=solr.DirectSolrSpellChecker,distanceMeasure=internal,accuracy=0.5,maxEdits=2,minPrefix=1,maxInspections=5,minQueryLength=4,maxQueryFrequency=0.01}
solr | 13653 INFO (coreLoadExecutor-6-thread-1) [ x:ckan] o.a.s.h.c.SpellCheckComponent No queryConverter defined, using default converter
solr | 13700 INFO (coreLoadExecutor-6-thread-1) [ x:ckan] o.a.s.h.c.QueryElevationComponent Loading QueryElevation from: /opt/solr/server/solr/ckan/conf/elevate.xml
solr | 13914 INFO (coreLoadExecutor-6-thread-1) [ x:ckan] o.a.s.h.ReplicationHandler Commits will be reserved for 10000
solr | 14015 INFO (searcherExecutor-7-thread-1-processing-x:ckan) [ x:ckan] o.a.s.c.QuerySenderListener QuerySenderListener sending requests to Searcher#2cd58256[ckan] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
solr | 14018 INFO (searcherExecutor-7-thread-1-processing-x:ckan) [ x:ckan] o.a.s.c.QuerySenderListener QuerySenderListener done.
solr | 14021 INFO (coreLoadExecutor-6-thread-1) [ x:ckan] o.a.s.u.UpdateLog Looking up max value of version field to seed version buckets
solr | 14023 INFO (coreLoadExecutor-6-thread-1) [ x:ckan] o.a.s.u.VersionInfo Refreshing highest value of _version_ for 65536 version buckets from index
solr | 14026 INFO (coreLoadExecutor-6-thread-1) [ x:ckan] o.a.s.u.VersionInfo No terms found for _version_, cannot seed version bucket highest value from index
solr | 14035 INFO (coreLoadExecutor-6-thread-1) [ x:ckan] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1574571440349380608
solr | 14020 INFO (searcherExecutor-7-thread-1-processing-x:ckan) [ x:ckan] o.a.s.h.c.SpellCheckComponent Loading spell index for spellchecker: default
solr | 14075 INFO (searcherExecutor-7-thread-1-processing-x:ckan) [ x:ckan] o.a.s.c.SolrCore [ckan] Registered new searcher Searcher#2cd58256[ckan] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
solr | 14088 INFO (coreLoadExecutor-6-thread-1) [ x:ckan] o.a.s.u.UpdateLog Took 65.0ms to seed version buckets with highest version 1574571440349380608
db | WARNING: there is already a transaction in progress
ckan | 2017-08-01 22:58:55,460 WARNI [ckan.lib.search] Problems were found while connecting to the SOLR server
ckan | Initialising DB: SUCCESS
ckan | *** Running /etc/rc.local...
ckan | *** Booting runit daemon...
ckan | *** Runit started as PID 25
ckan | * Starting Postfix Mail Transport Agent postfix
ckan | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.18.0.4. Set the 'ServerName' directive globally to suppress this message
ckan | ...done.
solr | 14092 INFO (coreLoadExecutor-6-thread-1) [ x:ckan] o.a.s.c.CoreContainer registering core: ckan
solr exited with code 137
db | WARNING: there is no transaction in progress
db exited with code 137
Any suggestions?
Thanks
Use docker-compose, which makes it easier to do these things. The file below should help you:
version: '2'
services:
  db:
    image: ckan/postgresql
  solr:
    image: ckan/solr
  ckan:
    image: ckan
    ports:
      - "80:80"
      - "5000:5000"
Do a docker-compose up to get this running. Then you can access CKAN at http://<HOSTIPofDocker>:80. If you are not able to access it, run docker-compose ps to check that everything is running, and check the logs using docker-compose logs -f ckan.
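If the ckan container starts but the portal is still not reachable, it can help to confirm from inside the ckan container that the solr and db service names resolve and their ports answer, since the errors in the question are connection failures to Solr. The following is a minimal sketch, assuming the service names and ports used in the compose files above; it is written to run on the Python 2.7 that the CKAN 2.x images ship with as well as on Python 3, and the file name connectivity_check.py is hypothetical:
# connectivity_check.py (sketch, hypothetical helper - not part of CKAN).
# Assumes the Compose service names "solr" and "db" with ports 8983 and 5432.
import socket

for host, port in (("solr", 8983), ("db", 5432)):
    try:
        conn = socket.create_connection((host, port), timeout=5)
        conn.close()
        print("%s:%d reachable" % (host, port))
    except socket.error as exc:
        print("%s:%d NOT reachable: %s" % (host, port, exc))
The "No host supplied" error in UPDATE 1 also points at empty or incomplete solr_url and sqlalchemy.url settings at that point, which is a configuration problem rather than a network one.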
The docker installation docs are out of date. I'm currently working on an update to add install docs for Docker Compose in this pull request. Hope this helps!
