Dockerized Phoenix/Elixir App Rejects All HTTP/socket requests

I'm trying to follow along with this tutorial to get my (functioning on localhost) Elixir/Phoenix app running in a Docker container, and I'm running into difficulties:
https://pspdfkit.com/blog/2018/how-to-run-your-phoenix-application-with-docker/
Here is my error:
[info] JOIN "room:lobby" to AlbatrossWeb.RoomChannel
phoenix_1 | Transport: Phoenix.Transports.WebSocket (2.0.0)
phoenix_1 | Serializer: Phoenix.Transports.V2.WebSocketSerializer
phoenix_1 | Parameters: %{}
phoenix_1 | inside room:lobby channel handler
phoenix_1 | [info] Replied room:lobby :ok
phoenix_1 | [error] Ranch protocol #PID<0.403.0> of listener AlbatrossWeb.Endpoint.HTTP (cowboy_protocol) terminated
phoenix_1 | ** (exit) exited in: Phoenix.Endpoint.CowboyWebSocket.resume()
phoenix_1 | ** (EXIT) an exception was raised:
phoenix_1 | ** (Protocol.UndefinedError) got FunctionClauseError with message "no function clause matching in Poison.Encoder.__protocol__/1" while retrieving Exception.message/1 for %Protocol.UndefinedError{description: "", protocol: Poison.Encoder, value: ["127", "127", "room:lobby", "phx_reply", %{response: %{}, status: :ok}]}
phoenix_1 | (poison) lib/poison/encoder.ex:66: Poison.Encoder.impl_for!/1
phoenix_1 | (poison) lib/poison/encoder.ex:69: Poison.Encoder.encode/2
phoenix_1 | (poison) lib/poison.ex:41: Poison.encode!/2
phoenix_1 | (phoenix) lib/phoenix/transports/v2/websocket_serializer.ex:22: Phoenix.Transports.V2.WebSocketSerializer.encode!/1
phoenix_1 | (phoenix) lib/phoenix/transports/websocket.ex:197: Phoenix.Transports.WebSocket.encode_reply/2
phoenix_1 | (phoenix) lib/phoenix/endpoint/cowboy_websocket.ex:77: Phoenix.Endpoint.CowboyWebSocket.websocket_handle/3
phoenix_1 | (cowboy) /app/deps/cowboy/src/cowboy_websocket.erl:588: :cowboy_websocket.handler_call/7
phoenix_1 | (phoenix) lib/phoenix/endpoint/cowboy_websocket.ex:49: Phoenix.Endpoint.CowboyWebSocket.resume/3
phoenix_1 | (cowboy) /app/deps/cowboy/src/cowboy_protocol.erl:442: :cowboy_protocol.execute/4
phoenix_1 | [info] JOIN "room:lobby" to AlbatrossWeb.RoomChannel
<....repeat forever....>
I'm not sure what is going on.
My room lobby is simply a socket channel defined in room_channel.ex as:
###room_channel.ex###
defmodule AlbatrossWeb.RoomChannel do
  use Phoenix.Channel

  def join("room:lobby", _message, socket) do
    IO.puts "inside room:lobby channel handler"
    {:ok, socket}
  end

  def join("room:" <> _private_room_id, _params, _socket) do
    {:error, %{reason: "unauthorized"}}
  end

  def handle_in("updated_comments", %{"payload" => payload}, socket) do
    IO.puts("inside updated_comments handle_in")
    broadcast! socket, "updated_comments", payload
    # ArticleController.retrieve(socket)
    {:noreply, socket}
  end
end
###room_channel.ex###
It runs fine when I run this without my docker files - what I added is the following:
###run.sh###
docker-compose up --build
###run.sh###
###Dockerfile###
FROM elixir:latest
RUN apt-get update && \
apt-get install -y postgresql-client
# Create app directory and copy the Elixir projects into it
RUN mkdir /app
COPY . /app
WORKDIR /app
# Install hex package manager
RUN mix local.hex --force
# Compile the project
RUN mix do compile
CMD ["/app/entrypoint.sh"]
###Dockerfile###
###docker-compose###
# Version of docker-compose
version: '3'

# Containers we are going to run
services:
  # Our Phoenix container
  phoenix:
    # The build parameters for this container.
    build:
      # Here we define that it should build from the current directory
      context: .
    environment:
      # Variables to connect to our Postgres server
      PGUSER: postgres
      PGPASSWORD: postgres
      PGDATABASE: db
      PGPORT: 5432
      # Hostname of our Postgres container
      PGHOST: db
    ports:
      # Mapping the port to make the Phoenix app accessible outside of the container
      - "4000:4000"
    depends_on:
      # The db container needs to be started before we start this container
      - db
  db:
    # We use the predefined Postgres image
    image: postgres:9.6
    environment:
      # Set user/password for Postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      # Set a path where Postgres should store the data
      PGDATA: /var/lib/postgresql/data/pgdata
    restart: always
    volumes:
      - pgdata:/var/lib/postgresql/data

# Define the volumes
volumes:
  pgdata:
###docker-compose###
###entrypoint.sh###
#!/bin/bash
while ! pg_isready -q -h $PGHOST -p $PGPORT -U $PGUSER
do
  echo "$(date) - waiting for database to start"
  sleep 2
done

# Create, migrate, and seed database if it doesn't exist.
if [[ -z `psql -Atqc "\\list $PGDATABASE"` ]]; then
  echo "Database $PGDATABASE does not exist. Creating..."
  createdb -E UTF8 $PGDATABASE -l en_US.UTF-8 -T template0
  echo "1"
  mix do ecto.drop, ecto.create
  echo "2"
  mix phx.gen.schema Binarys binary postnum:integer leftchild:integer rightchild:integer downvotes:integer message:string parent:string upvotes:integer
  echo "3"
  mix phx.gen.schema Comments comment postnum:integer children:map downvotes:integer message:string parent:string upvotes:integer identifier:uuid
  echo "4"
  mix ecto.migrate
  echo "5"
  mix run priv/repo/seeds.exs
  echo "Database $PGDATABASE created."
fi

exec mix phx.server
###entrypoint.sh###
I also changed the config in my dev.exs file like this:
###dev.exs###
config :albatross, Albatross.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "postgres",
  password: "postgres",
  hostname: "db",
  database: "db",
  # port: 5432,
  pool_size: 10
###dev.exs###
Interestingly, all of these errors seem to appear when my frontend is up but not making requests (other than connecting to the socket). If I try to make an HTTP request I get this:
phoenix_1 | [info] POST /addComment
phoenix_1 | inside addComment
phoenix_1 | [debug] Processing with AlbatrossWeb.PageController.addComment/2
phoenix_1 | Parameters: %{"payload" => %{"message" => "sf", "parent" => "no_parent", "postnum" => 6, "requestType" => "post", "urlKEY" => "addComment"}}
phoenix_1 | Pipelines: [:browser]
phoenix_1 | [error] Failure while translating Erlang's logger event
phoenix_1 | ** (Protocol.UndefinedError) got FunctionClauseError with message "no function clause matching in Plug.Exception.__protocol__/1" while retrieving Exception.message/1 for %Protocol.UndefinedError{description: "", protocol: Plug.Exception, value: %Protocol.UndefinedError{description: "", protocol: Plug.Exception, value: %Protocol.UndefinedError{description: "", protocol: Plug.Exception, value: %Protocol.UndefinedError{description: "", protocol: String.Chars, value: %Postgrex.Query{columns: nil, name: "", param_formats: nil, param_oids: nil, param_types: nil, ref: nil, result_formats: nil, result_oids: nil, result_types: nil, statement: ["INSERT INTO ", [34, "comment", 34], [], [32, 40, [[[[[[[[[[], [34, "children", 34], 44], [34, "downvotes", 34], 44], [34, "identifier", 34], 44], [34, "message", 34], 44], [34, "parent", 34], 44], [34, "postnum", 34], 44], [34, "upvotes", 34], 44], [34, "inserted_at", 34], 44], 34, "updated_at", 34], ") VALUES ", [], 40, [[[[[[[[[[], [36 | "1"], 44], [36 | "2"], 44], [36 | "3"], 44], [36 | "4"], 44], [36 | "5"], 44], [36 | "6"], 44], [36 | "7"], 44], [36 | "8"], 44], 36 | "9"], 41], [], " RETURNING ", [], 34, "id", 34], types: nil}}}}}
phoenix_1 | (plug) lib/plug/exceptions.ex:4: Plug.Exception.impl_for!/1
phoenix_1 | (plug) lib/plug/exceptions.ex:19: Plug.Exception.status/1
phoenix_1 | (plug) lib/plug/adapters/translator.ex:79: Plug.Adapters.Translator.non_500_exception?/1
phoenix_1 | (plug) lib/plug/adapters/translator.ex:49: Plug.Adapters.Translator.translate_ranch/5
phoenix_1 | (logger) lib/logger/erlang_handler.ex:104: Logger.ErlangHandler.translate/6
phoenix_1 | (logger) lib/logger/erlang_handler.ex:97: Logger.ErlangHandler.translate/5
phoenix_1 | (logger) lib/logger/erlang_handler.ex:30: anonymous fn/3 in Logger.ErlangHandler.log/2
phoenix_1 | (logger) lib/logger.ex:861: Logger.normalize_message/2
phoenix_1 | (logger) lib/logger.ex:684: Logger.__do_log__/3
phoenix_1 | (kernel) logger_backend.erl:51: :logger_backend.call_handlers/3
phoenix_1 | (kernel) logger_backend.erl:38: :logger_backend.log_allowed/2
phoenix_1 | (ranch) /app/deps/ranch/src/ranch_conns_sup.erl:167: :ranch_conns_sup.loop/4
phoenix_1 | (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
phoenix_1 |
So you can see that it sees the request and appears to be processing it; it just can't return a response. I have my ports exposed in both my Docker and docker-compose files, and I really can't see what else could be going wrong, as this app works when I run it outside the Docker containers.
What is going wrong?

I think the problem lies in your Dockerfile.
You didn't expose any port.
To be able to publish a port, you need to expose it first.
Try adding EXPOSE 4000 to your Dockerfile.

Not enough reputation to reply to the other answer, but I wanted to inform potential readers that the EXPOSE instruction is nothing but documentation. It is not necessary to expose a port before publishing it.
From the official docker documentation:
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
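For illustration, a minimal sketch (the image name is a placeholder) of where the distinction matters:

# Publishing works with or without EXPOSE in the Dockerfile:
# -p explicitly maps host port 4000 to container port 4000,
# exactly like the "4000:4000" entry under ports: in docker-compose.
docker run -p 4000:4000 my_phoenix_image

# -P is where EXPOSE matters: it publishes every EXPOSEd port
# onto a random high-numbered host port.
docker run -P my_phoenix_image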

Sorry to add this as an answer, but I am facing some problems while trying to create a Docker container for an Elixir/Phoenix app. I am only creating a REST API (no HTML) type of project, and it works perfectly locally, but when I create the Docker image and container and run the container, the app runs fine; however, when I make a request from Postman I always get an error saying "socket hang up", which is not very clear. Please check below the errors from Postman.
[screenshots: Postman "socket hang up" errors]
This is my Dockerfile:
FROM elixir:alpine
RUN mkdir /app
COPY . /app
WORKDIR /app
RUN apk update && apk add inotify-tools
RUN mix local.hex --force && mix local.rebar --force
RUN mix do deps.get, deps.compile
EXPOSE 4000
CMD ["mix", "phx.server"]
You have to know that I am trying to keep this as simple as I can, since I am still learning Elixir and Phoenix and want to keep things clear for myself, so I don't have any networking or security setup with docker-compose or any other integration. I just want to run my REST API made in Elixir/Phoenix inside a Docker container.
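Not something that can be confirmed from the post alone, but one common cause of "socket hang up" with a Phoenix app in Docker is the endpoint binding only to loopback inside the container, so the published port accepts connections that nothing answers. A minimal sketch of the relevant config/dev.exs entry, with :my_app and MyAppWeb.Endpoint as hypothetical placeholder names:

# config/dev.exs (sketch; app and module names are placeholders)
config :my_app, MyAppWeb.Endpoint,
  # Bind to all interfaces so requests arriving through the published
  # Docker port (e.g. docker run -p 4000:4000) actually reach the endpoint.
  http: [ip: {0, 0, 0, 0}, port: 4000]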

Related

how to setup multiple apps on different subdomains using docker compose and 'SteveLTN/https-portal'?

I am using a DigitalOcean droplet to host the apps.
I have different apps that I want to run on different subdomains (sub1.example.com, sub2.example.com, etc.).
So far I have managed to run just one of them on the main domain (e.g. example.com).
The folder structure looks like this:
-/apps
  -/app1Folder
    docker-compose.yml
    Dockerfile
    -/codeFolder
  -/app2Folder
    docker-compose.yml
    Dockerfile
    -/codeFolder
Each Dockerfile looks like this (of course the app name and the 5001 port are different for each app):
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["App1.csproj", "."]
RUN dotnet restore "./App1.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "App1.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "App1.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV ASPNETCORE_URLS http://+:5001
EXPOSE 5001
ENTRYPOINT ["dotnet", "App1.dll"]
Each docker-compose.yml file has the following structure:
version: '1'
services:
  app:
    build:
      context: ./
      dockerfile: Dockerfile
  https-portal:
    image: steveltn/https-portal:1
    ports:
      - '80:80'
      - '443:443'
    links:
      - app
    restart: always
    environment:
      DOMAINS: 'sub1.example.com -> http://app:5001'
      # for app2 we route to 'sub2.example.com'
      # STAGE: 'production'
From what I understood so far, I should have only one docker-compose file that contains all the config for both apps that I need to run.
I am not sure how to make the routing work or what I should change to make it work.
I am new to all this and don't know exactly where I should start.
Right now, after running the first app (which works), I get the error below when trying to docker-compose up the second one.
I think it has something to do with the fact that I am basically trying to run two Docker images of https-portal that both expose ports 80 and 443 by default. Even if I change the ports on the second app (app2), it still gives the same error.
I am positive that the DNS is configured correctly and propagated to the host.
Any suggestion is welcome! Please help! :)
Signing certificates from https://acme-staging-v02.api.letsencrypt.org/directory ...
https-portal_1 | Parsing account key...
https-portal_1 | Parsing CSR...
https-portal_1 | Found domains: sub2.example.com
https-portal_1 | Getting directory...
https-portal_1 | Directory found!
https-portal_1 | Registering account...
https-portal_1 | Already registered!
https-portal_1 | Creating new order...
https-portal_1 | Order created!
https-portal_1 | Verifying sub2.example.com...
https-portal_1 | Traceback (most recent call last):
https-portal_1 | File "/bin/acme_tiny", line 198, in <module>
https-portal_1 | main(sys.argv[1:])
https-portal_1 | File "/bin/acme_tiny", line 194, in main
https-portal_1 | signed_crt = get_crt(args.account_key, args.csr, args.acme_dir, log=LOGGER, CA=args.ca, disable_check=args.disable_check, directory_url=args.directory_url, contact=args.contact)
https-portal_1 | File "/bin/acme_tiny", line 149, in get_crt
https-portal_1 | raise ValueError("Challenge did not pass for {0}: {1}".format(domain, authorization))
https-portal_1 | ValueError: Challenge did not pass for sub2.example.com: {u'status': u'invalid', u'challenges': [{u'status': u'invalid', u'validationRecord': [{u'url': u'http://sub2.example.com/.well-known/acme-challenge/neHJEUpiAjxhvk1nicoRnDaT_xOAIXMaG8MxJstPy14', u'hostname': u'sub2.example.com', u'addressUsed': u'143.198.249.45', u'port': u'80', u'addressesResolved': [u'IP_ADDRESS']}], u'url': u'https://acme-staging-v02.api.letsencrypt.org/acme/chall-v3/4917620723/B0GD9Q', u'token': u'neHJEUpiAjxhvk1nicoRnDaT_xOAIXMaG8MxJstPy14', u'error': {u'status': 400, u'type': u'urn:ietf:params:acme:error:connection', u'detail': u'143.198.249.45: Fetching http://sub2.example.com/.well-known/acme-challenge/neHJEUpiAjxhvk1nicoRnDaT_xOAIXMaG8MxJstPy14: Error getting validation data'}, u'validated': u'2023-01-11T19:31:34Z', u'type': u'http-01'}], u'identifier': {u'type': u'dns', u'value': u'sub2.example.com'}, u'expires': u'2023-01-18T19:31:33Z'}
https-portal_1 | ================================================================================
https-portal_1 | Failed to sign sub2.example.com.
https-portal_1 | Make sure you DNS is configured correctly and is propagated to this host
https-portal_1 | machine. Sometimes that takes a while.
https-portal_1 | ================================================================================
https-portal_1 | Failed to obtain certs for sub2.example.com
https-portal_1 | [cont-init.d] 20-setup: exited 1.
https-portal_1 | [cont-finish.d] executing container finish scripts...
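For what it's worth, here is a minimal sketch of the single combined docker-compose.yml described above: one https-portal container owns ports 80/443 and routes each subdomain to its own app service. Service names and ports are illustrative, and the comma-separated DOMAINS syntax is an assumption based on the https-portal README rather than something verified here.

version: '3'
services:
  app1:
    build:
      context: ./app1Folder
      dockerfile: Dockerfile
  app2:
    build:
      context: ./app2Folder
      dockerfile: Dockerfile
  https-portal:
    image: steveltn/https-portal:1
    ports:
      - '80:80'
      - '443:443'
    restart: always
    environment:
      # Route each subdomain to its own app service on the shared network
      DOMAINS: 'sub1.example.com -> http://app1:5001, sub2.example.com -> http://app2:5002'
      # STAGE: 'production'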

Getting template error while templating string when trying to work with json and json_query filter

I am trying to execute this Ansible command:
ansible localhost -m debug -a "var={{ db_config| to_json |json_query(*)}}" \
-e 'db_config=[{"name":"mydb1","services":["app1","app2"]},{"name":"mydb12","services":["app1"]}]'
The goal is to get all JSON elements using json_query(*),
but I'm getting the following error:
fatal: [localhost]: FAILED! =>
msg: 'template error while templating string: expected token ''end of print statement'', got ''name''. String: {{"[{\\"name\\":\\"mydb1\\",\\"services\\":[\\"app1\\",\\"app2\\"]},{\\"name\\":\\"mydb12\\",\\"services\\":[\\"app1\\"]}]"}}'
There are several issues in this single one-liner:
The parameter to use for debug in your case is msg (to output the result of a template or static value) and not var (used to debug a single var without any transformation). See the debug module documentation.
You are passing as a parameter a JSON string representation. The filter you want to use is from_json (to transform your string into a dict/list) and not to_json (which does the exact opposite).
Your jmespath expression in json_query is invalid for two reasons:
It's not a string (just a bare wildcard character).
Even quoted, it would not respect the specification (follow the above link).
The following fixed one-liner gives the result (I believe...) you expect:
$ ansible localhost -m debug -a 'msg={{ db_config | from_json | json_query("[]") }}' \
-e 'db_config=[{"name":"mydb1","services":["app1","app2"]},{"name":"mydb12","services":["app1"]}]'
localhost | SUCCESS => {
"msg": [
{
"name": "mydb1",
"services": [
"app1",
"app2"
]
},
{
"name": "mydb12",
"services": [
"app1"
]
}
]
}
Note that you can even drop the JSON decoding step by passing your argument as fully inlined JSON:
$ ansible localhost -m debug -a 'msg={{ db_config | json_query("[]") }}' \
-e '{"db_config":[{"name":"mydb1","services":["app1","app2"]},{"name":"mydb12","services":["app1"]}]}'
localhost | SUCCESS => {
"msg": [
{
"name": "mydb1",
"services": [
"app1",
"app2"
]
},
{
"name": "mydb12",
"services": [
"app1"
]
}
]
}
I suspect you are trying all this in order to later test json_query expressions to select your data.
You should be aware that there are a lot of other core filters in Ansible (among which selectattr, rejectattr, select, reject, map, dict2items, items2dict, first, last, unique, zip, subelements, ...) that, combined together, can do most of what json_query can do without its overhead (installing a Python module and an Ansible collection). json_query is only really needed for very complex queries.
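For example, a minimal sketch using only the core map filter to pull out every name (the same result json_query("[].name") would give), reusing the inline JSON variable from the examples above, should give:

$ ansible localhost -m debug -a 'msg={{ db_config | map(attribute="name") | list }}' \
  -e '{"db_config":[{"name":"mydb1","services":["app1","app2"]},{"name":"mydb12","services":["app1"]}]}'
localhost | SUCCESS => {
    "msg": [
        "mydb1",
        "mydb12"
    ]
}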
In case I am wrong and your final goal is really to display all elements of your input data, you can simplify a lot. From the two examples above:
$ ansible localhost -m debug -a 'msg={{ db_config | from_json }}' \
-e 'db_config=[{"name":"mydb1","services":["app1","app2"]},{"name":"mydb12","services":["app1"]}]'
# OR (the next one can work with `var` for debug)
$ ansible localhost -m debug -a 'var=db_config' \
-e '{"db_config":[{"name":"mydb1","services":["app1","app2"]},{"name":"mydb12","services":["app1"]}]}'
If you put the data into a file
shell> cat test-data.yml
---
db_config:
  - {name: "mydb1", services: ["app1", "app2"]}
  - {name: "mydb12", services: ["app1"]}
and if you put the code into the playbook
shell> cat test.yml
---
- hosts: localhost
  tasks:
    - debug:
        msg: "{{ db_config|json_query('[]') }}"
The command works as expected
shell> ansible-playbook test.yml -e @test-data.yml
...
msg:
- name: mydb1
  services:
  - app1
  - app2
- name: mydb12
  services:
  - app1

docker: vimagick/stunnel ==> /entrypoint.sh: line 21: openssl: not found

I am working on an Ubuntu 18.04.4 LTS VM, where I have docker and docker-compose installed.
I am using the vimagick/stunnel image to build a tunnel against a client for QuickFIX services.
Problem: in a new installation, when I bring up the docker-compose file, it throws the following error:
tunnel_primary_1 | chmod: stunnel.pem: No such file or directory
tunnel_primary_1 | [ ] Clients allowed=512000
tunnel_primary_1 | [.] stunnel 5.56 on x86_64-alpine-linux-musl platform
tunnel_primary_1 | [.] Compiled/running with OpenSSL 1.1.1d 10 Sep 2019
tunnel_primary_1 | [.] Threading:PTHREAD Sockets:POLL,IPv6 TLS:ENGINE,OCSP,PSK,SNI
tunnel_primary_1 | [ ] errno: (*__errno_location())
tunnel_primary_1 | [.] Reading configuration from file /etc/stunnel/stunnel.conf
tunnel_primary_1 | [.] UTF-8 byte order mark not detected
tunnel_primary_1 | [ ] No PRNG seeding was required
tunnel_primary_1 | [ ] Initializing service [quickfix]
tunnel_primary_1 | [ ] Ciphers: HIGH:!aNULL:!SSLv2:!DH:!kDHEPSK
tunnel_primary_1 | [ ] TLSv1.3 ciphersuites: TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256
tunnel_primary_1 | [ ] TLS options: 0x02100004 (+0x00000000, -0x00000000)
tunnel_primary_1 | [ ] Loading certificate from file: /etc/stunnel/stunnel.pem
tunnel_primary_1 | [!] error queue: ssl/ssl_rsa.c:615: error:140DC002:SSL routines:use_certificate_chain_file:system lib
tunnel_primary_1 | [!] error queue: crypto/bio/bss_file.c:290: error:20074002:BIO routines:file_ctrl:system lib
tunnel_primary_1 | [!] SSL_CTX_use_certificate_chain_file: crypto/bio/bss_file.c:288: error:02001002:system library:fopen:No such file or directory
tunnel_primary_1 | [!] Service [quickfix]: Failed to initialize TLS context
tunnel_primary_1 | [ ] Deallocating section defaults
prueba1_tunnel_primary_1 exited with code 1
This is my docker-compose.yml:
version: '3'
services:
  tunnel_primary:
    image: vimagick/stunnel
    ports:
      - "6789:6789"
    environment:
      - CLIENT=yes
      - SERVICE=quickfix
      - ACCEPT=0.0.0.0:6789
      - CONNECT=11.11.11.11:1234
    logging:
      driver: "json-file"
      options:
        max-size: "1024k"
        max-file: "10"
In the VM that is in production it works, and there is no difference in the installation. However, the vimagick/stunnel Docker image I use in production is seven months old.
Thanks!
This Docker image has been broken since they switched to libressl (without updating their launch script, which still uses openssl).
There is a pull request fixing this issue that will (hopefully) be merged.
In the meantime you can fork the repo containing the Dockerfile and modify dockerfiles/stunnel/docker-entrypoint.sh, replacing openssl with libressl.
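If you don't want to fork and rebuild the whole repo, a minimal workaround sketch is to patch the script in a derived image. Assumptions: the script lives at /entrypoint.sh inside the image (as the error message suggests) and a libressl command-line tool is available there (as the pull request implies).

# Dockerfile for a locally patched image
FROM vimagick/stunnel
# Swap the openssl calls in the launch script for libressl
RUN sed -i 's/openssl/libressl/g' /entrypoint.sh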
I ended up creating a new image on Docker Hub; use prokofyevdmitry/stunnel instead of vimagick/stunnel in your docker-compose.yml file.

cannot connect to redis container from app container

I followed the Django Channels tutorial to build a simple chat app:
https://channels.readthedocs.io/en/latest/tutorial/part_1.html
It works on my local machine with Redis, without Docker.
Now I want to put it into Docker containers with a docker-compose file, but it seems the app cannot connect to the Redis container. I have already tried and googled for two days, but none of the methods work,
so I want to ask for help here.
Folder structure:
my_project
  - mysite (django app)
    - ... some folders and files
  - docker-compose.yml
  - Dockerfile
  - requirements.txt
docker-compose.yml
version: '3'
services:
  app:
    build:
      # current directory
      context: .
    ports:
      # host to container
      - "8000:8000"
    volumes:
      # map directory to image, which means if something changed in
      # current directory, it will automatically reflect on image,
      # don't need to restart docker to get the changes into effect
      - ./mysite:/mysiteRoot/mysite
    command: >
      sh -c "
      python3 manage.py makemigrations &&
      python3 manage.py migrate &&
      python3 manage.py runserver 0.0.0.0:8000"
    depends_on:
      - redis
  redis:
    image: redis:5.0.5-alpine
    ports:
      # host to container
      - "6379:6379"
Dockerfile
FROM python:3.7-alpine
MAINTAINER Aaron Wei.
ENV PYTHONUNBUFFERED 1
EXPOSE 8000
# Setup directory structure
RUN mkdir /mysiteRoot
WORKDIR /mysiteRoot/mysite/
# Install dependencies
COPY ./requirements.txt /mysiteRoot/requirements.txt
RUN apk add --update --no-cache postgresql-client
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc libc-dev linux-headers postgresql-dev
RUN apk add build-base python-dev py-pip jpeg-dev zlib-dev
ENV LIBRARY_PATH=/lib:/usr/lib
RUN pip install -r /mysiteRoot/requirements.txt
RUN apk del .tmp-build-deps
# Copy application
COPY ./mysite/ /mysiteRoot/mysite/
RUN adduser -D user
USER user
Django settings file
"""
Django settings for mysite project.
Generated by 'django-admin startproject' using Django 2.2.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 's0d^&2#s^126#6dsm7u4-t9pg03)if$dq##xxouht)#%#=o)r0'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ['0.0.0.0']
# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'channels',
    'chat',
]
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'mysite.urls'
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
WSGI_APPLICATION = 'mysite.wsgi.application'
ASGI_APPLICATION = 'mysite.routing.application'
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            "hosts": [('0.0.0.0', 6379)],
        },
    },
}
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_URL = '/static/'
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7966fe4962f7 tourmama_copy_app "sh -c '\n pyth…" 21 minutes ago Up 20 minutes 0.0.0.0:8000->8000/tcp tourmama_copy_app_1
61446d04c4b2 redis:5.0.5-alpine "docker-entrypoint.s…" 27 minutes ago Up 20 minutes 0.0.0.0:6379->6379/tcp tourmama_copy_redis_1
Then the error in console shows :
app_1 | HTTP GET /chat/dadas/ 200 [0.01, 172.19.0.1:39468]
app_1 | WebSocket HANDSHAKING /ws/chat/dadas/ [172.19.0.1:39470]
app_1 | Exception inside application: [Errno 111] Connect call failed ('0.0.0.0', 6379)
app_1 | File "/usr/local/lib/python3.7/site-packages/channels/sessions.py", line 179, in __call__
app_1 | return await self.inner(receive, self.send)
app_1 | File "/usr/local/lib/python3.7/site-packages/channels/middleware.py", line 41, in coroutine_call
app_1 | await inner_instance(receive, send)
app_1 | File "/usr/local/lib/python3.7/site-packages/channels/consumer.py", line 59, in __call__
app_1 | [receive, self.channel_receive], self.dispatch
app_1 | File "/usr/local/lib/python3.7/site-packages/channels/utils.py", line 59, in await_many_dispatch
app_1 | await task
app_1 | File "/usr/local/lib/python3.7/site-packages/channels_redis/core.py", line 425, in receive
app_1 | real_channel
app_1 | File "/usr/local/lib/python3.7/site-packages/channels_redis/core.py", line 477, in receive_single
app_1 | index, channel_key, timeout=self.brpop_timeout
app_1 | File "/usr/local/lib/python3.7/site-packages/channels_redis/core.py", line 324, in _brpop_with_clean
app_1 | async with self.connection(index) as connection:
app_1 | File "/usr/local/lib/python3.7/site-packages/channels_redis/core.py", line 813, in __aenter__
app_1 | self.conn = await self.pool.pop()
app_1 | File "/usr/local/lib/python3.7/site-packages/channels_redis/core.py", line 70, in pop
app_1 | conns.append(await aioredis.create_redis(**self.host, loop=loop))
app_1 | File "/usr/local/lib/python3.7/site-packages/aioredis/commands/__init__.py", line 178, in create_redis
app_1 | loop=loop)
app_1 | File "/usr/local/lib/python3.7/site-packages/aioredis/connection.py", line 108, in create_connection
app_1 | timeout, loop=loop)
app_1 | File "/usr/local/lib/python3.7/asyncio/tasks.py", line 388, in wait_for
app_1 | return await fut
app_1 | File "/usr/local/lib/python3.7/site-packages/aioredis/stream.py", line 19, in open_connection
app_1 | lambda: protocol, host, port, **kwds)
app_1 | File "/usr/local/lib/python3.7/asyncio/base_events.py", line 959, in create_connection
app_1 | raise exceptions[0]
app_1 | File "/usr/local/lib/python3.7/asyncio/base_events.py", line 946, in create_connection
app_1 | await self.sock_connect(sock, address)
app_1 | File "/usr/local/lib/python3.7/asyncio/selector_events.py", line 464, in sock_connect
app_1 | return await fut
app_1 | File "/usr/local/lib/python3.7/asyncio/selector_events.py", line 494, in _sock_connect_cb
app_1 | raise OSError(err, f'Connect call failed {address}')
app_1 | [Errno 111] Connect call failed ('0.0.0.0', 6379)
app_1 | WebSocket DISCONNECT /ws/chat/dadas/ [172.19.0.1:39470]
Does anyone have any idea how to connect to the Redis container from the app container?
Thank you!
You should change:
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            "hosts": [('0.0.0.0', 6379)],
        },
    },
}
to
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            "hosts": [('redis', 6379)],
        },
    },
}
in your Django settings file.
When you set up containers with Compose they are all connected to the default network created by Compose. redis is, in this case, the DNS name of the Redis container and will be resolved to the container's IP automatically.
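As a quick sanity check (a sketch, assuming the app and redis service names from the compose file above), you can confirm that the name resolves from inside the app container:

# Run from the host; prints the Redis container's IP on the Compose default network
docker-compose exec app python -c "import socket; print(socket.gethostbyname('redis'))"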

Unable to connect to Postgres after docker-compose up.

I am attempting to Dockerize a Rails app using a number of online tutorials. I've reached the point where I can successfully fire up a Docker container using docker-compose up, but after that point I have trouble connecting to my database. The following is my docker-compose up output:
docker-compose up
Pulling redis (redis:latest)...
latest: Pulling from library/redis
75a822cd7888: Pull complete
e40c2fafe648: Pull complete
ce384d4aea4f: Pull complete
5e29dd684b84: Pull complete
29a3c975c335: Pull complete
a405554540f9: Pull complete
4b2454731fda: Pull complete
Digest: sha256:eed4da4937cb562e9005f3c66eb8c3abc14bb95ad497c03dc89d66bcd172fc7f
Status: Downloaded newer image for redis:latest
Pulling postgres (postgres:9.5.4)...
9.5.4: Pulling from library/postgres
43c265008fae: Pull complete
215df7ad1b9b: Pull complete
833a4a9c3573: Pull complete
e5716357a052: Pull complete
6552dfce18a3: Pull complete
b75b371d1e9f: Pull complete
ecc63fd465b8: Pull complete
8eb11995a95a: Pull complete
9c82fb17fc44: Pull complete
389787480cc2: Pull complete
01988d09a399: Pull complete
Digest: sha256:1480f2446dabb1116fafa426ac530d2404277873a84dc4a4d0d9d4b37a5601e8
Status: Downloaded newer image for postgres:9.5.4
Creating redis
Creating postgres
Attaching to postgres, redis
postgres | The files belonging to this database system will be owned by user "postgres".
postgres | This user must also own the server process.
postgres |
redis | 1:C 02 Jan 21:08:36.583 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis | _._
postgres | The database cluster will be initialized with locale "en_US.utf8".
redis | _.-``__ ''-._
postgres | The default database encoding has accordingly been set to "UTF8".
redis | _.-`` `. `_. ''-._ Redis 3.2.6 (00000000/0) 64 bit
postgres | The default text search configuration will be set to "english".
redis | .-`` .-```. ```\/ _.,_ ''-._
redis | ( ' , .-` | `, ) Running in standalone mode
postgres |
redis | |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
postgres | Data page checksums are disabled.
redis | | `-._ `._ / _.-' | PID: 1
postgres |
postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok
redis | `-._ `-._ `-./ _.-' _.-'
redis | |`-._`-._ `-.__.-' _.-'_.-'|
postgres | creating subdirectories ... ok
redis | | `-._`-._ _.-'_.-' | http://redis.io
redis | `-._ `-._`-.__.-'_.-' _.-'
redis | |`-._`-._ `-.__.-' _.-'_.-'|
redis | | `-._`-._ _.-'_.-' |
redis | `-._ `-._`-.__.-'_.-' _.-'
postgres | selecting default max_connections ... 100
redis | `-._ `-.__.-' _.-'
redis | `-._ _.-'
redis | `-.__.-'
redis |
redis | 1:M 02 Jan 21:08:36.584 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
postgres | selecting default shared_buffers ... 128MB
postgres | selecting dynamic shared memory implementation ... posix
redis | 1:M 02 Jan 21:08:36.584 # Server started, Redis version 3.2.6
redis | 1:M 02 Jan 21:08:36.584 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis | 1:M 02 Jan 21:08:36.584 * The server is now ready to accept connections on port 6379
postgres | creating configuration files ... ok
postgres | creating template1 database in /var/lib/postgresql/data/base/1 ... ok
postgres | initializing pg_authid ... ok
postgres | initializing dependencies ... ok
postgres | creating system views ... ok
postgres | loading system objects' descriptions ... ok
postgres | creating collations ... ok
postgres | creating conversions ... ok
postgres | creating dictionaries ... ok
postgres | setting privileges on built-in objects ... ok
postgres | creating information schema ... ok
postgres | loading PL/pgSQL server-side language ... ok
postgres | vacuuming database template1 ... ok
postgres | copying template1 to template0 ... ok
postgres | copying template1 to postgres ... ok
postgres | syncing data to disk ... ok
postgres |
postgres | Success. You can now start the database server using:
postgres |
postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres |
postgres |
postgres | WARNING: enabling "trust" authentication for local connections
postgres | You can change this by editing pg_hba.conf or using the option -A, or
postgres | --auth-local and --auth-host, the next time you run initdb.
postgres | ****************************************************
postgres | WARNING: No password has been set for the database.
postgres | This will allow anyone with access to the
postgres | Postgres port to access your database. In
postgres | Docker's default configuration, this is
postgres | effectively any other container on the same
postgres | system.
postgres |
postgres | Use "-e POSTGRES_PASSWORD=password" to set
postgres | it in "docker run".
postgres | ****************************************************
postgres | waiting for server to start....LOG: database system was shut down at 2017-01-02 21:08:37 UTC
postgres | LOG: MultiXact member wraparound protections are now enabled
postgres | LOG: database system is ready to accept connections
postgres | LOG: autovacuum launcher started
postgres | done
postgres | server started
postgres | CREATE DATABASE
postgres |
postgres | ALTER ROLE
postgres |
postgres |
postgres | /docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
postgres |
postgres | LOG: received fast shutdown request
postgres | waiting for server to shut down...LOG: aborting any active transactions
postgres | .LOG: autovacuum launcher shutting down
postgres | LOG: shutting down
postgres | LOG: database system is shut down
postgres | done
postgres | server stopped
postgres |
postgres | PostgreSQL init process complete; ready for start up.
postgres |
postgres | LOG: database system was shut down at 2017-01-02 21:08:39 UTC
postgres | LOG: MultiXact member wraparound protections are now enabled
postgres | LOG: database system is ready to accept connections
postgres | LOG: autovacuum launcher started
postgres | FATAL: role "boguthrie" does not exist
postgres | FATAL: role "boguthrie" does not exist
postgres | FATAL: role "user" does not exist
You can see in the final lines of output that I have tried a number of different user roles in my database.yml that I know exist (e.g. when I use the Postgres app I can successfully access my db using those roles). When I try to take a look at my running databases with psql <dbname> or psql -U user -d <dbname> -h localhost I get the following error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Finally, here are potentially relevant files.
database.yml
# PostgreSQL. Versions 8.2 and up are supported.
#
# Install the pg driver:
# gem install pg
# On OS X with Homebrew:
# gem install pg -- --with-pg-config=/usr/local/bin/pg_config
# On OS X with MacPorts:
# gem install pg -- --with-pg-config=/opt/local/lib/postgresql84/bin/pg_config
# On Windows:
# gem install pg
# Choose the win32 build.
# Install PostgreSQL and put its /bin directory on your path.
#
# Configure Using Gemfile
# gem 'pg'
#
default: &default
  adapter: postgresql
  encoding: unicode
  # For details on connection pooling, see rails configuration guide
  # http://guides.rubyonrails.org/configuring.html#database-pooling
  pool: 5
  database: example
  # The specified database role being used to connect to postgres.
  # To create additional roles in postgres see `$ createuser --help`.
  # When left blank, postgres will use the default role. This is
  # the same name as the operating system user that initialized the database.
  username: boguthrie
  # The password associated with the postgres role (username).
  password:
  # Connect on a TCP socket. Omitted by default since the client uses a
  # domain socket that doesn't need configuration. Windows does not have
  # domain sockets, so uncomment these lines.
  host: localhost
  # The TCP port the server listens on. Defaults to 5432.
  # If your server runs on a different port number, change accordingly.
  port: 5432
  # Schema search path. The server defaults to $user,public
  #schema_search_path: myapp,sharedapp,public
  # Minimum log levels, in increasing order:
  #   debug5, debug4, debug3, debug2, debug1,
  #   log, notice, warning, error, fatal, and panic
  # Defaults to warning.
  #min_messages: notice

development:
  <<: *default

# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
  <<: *default
  database: example_test
docker-compose.yml
version: '2'
services:
  postgres:
    container_name: a
    image: postgres:9.5.4
    environment:
      POSTGRES_PASSWORD:
      POSTGRES_USER:
      POSTGRES_DB: example
    ports:
      - "5432:5432"
  redis:
    container_name: redis
    image: redis
    ports:
      - "6379:6379"
Dockerfile
# The following are in the Dockerfile instructions
# The first non-comment instruction must be `FROM` in order to specify the Base Image from which you are building.
# 'FROM' can appear multiple times within a single Dockerfile in order to create multiple images.
# Simply make a note of the last image ID output by the commit before each new FROM command.
FROM ruby:2.3
MAINTAINER Bo
# The LABEL instruction adds metadata to an image.
# A LABEL is a key-value pair.
# To include spaces within a LABEL value, use quotes and backslashes as you would in command-line parsing.
# Use the docker inspect command to see labels.
LABEL version="0.1"
LABEL description="Example App"
# 'RUN' has two forms:
# The shell form or the executable form. All of the run commands in this file are in the shell form.
# This will throw errors if Gemfile has been modified since Gemfile.lock
RUN bundle config --global frozen 1
# Here we're creating the directory /usr/src/app and using it as our working directory.
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN apt-get update && apt-get install -y nodejs --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y mysql-client postgresql-client sqlite3 --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y imagemagick --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y graphviz --no-install-recommends && rm -rf /var/lib/apt/lists/*
# The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.
COPY Gemfile /usr/src/app/
COPY Gemfile.lock /usr/src/app/
RUN bundle install
COPY . /usr/src/app
# The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime.
# EXPOSE does not make the ports of the container accessible to the host.
EXPOSE 3000
# The main purpose of a CMD is to provide defaults for an executing container.
# These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.
# There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
# Example common usage: CMD ["rails", "server", "-b", "0.0.0.0", "-P", "/tmp/server.pid"]. This will store the pid in a location not persisted between boots
# Define the script we want run once the container boots
# Use the "exec" form of CMD so our script shuts down gracefully on SIGTERM (i.e. `docker stop`)
CMD [ "config/containers/app_cmd.sh" ]
Any help here would be appreciated. Thanks for your time.
Your role does not exist. This is because POSTGRES_USER is not set in your docker-compose.yml file. If you set that value and recreate the container, the role will be created. POSTGRES_USER needs to match the user in the database.yml file for Rails.
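For illustration, a minimal sketch of what the postgres service could look like so that it matches the database.yml above. The password value is an arbitrary placeholder; whatever you pick must also be set in database.yml.

postgres:
  container_name: a
  image: postgres:9.5.4
  environment:
    POSTGRES_USER: boguthrie      # must match username: in database.yml
    POSTGRES_PASSWORD: changeme   # must match password: in database.yml
    POSTGRES_DB: example
  ports:
    - "5432:5432"

Note that the official postgres image only runs its initialization when the data directory is empty, so you may need to remove the old container and its volume (for example with docker-compose down -v) before the new role appears.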
