After installing Channels, Daphne throws 500 Internal Server Error - django-channels

I am trying out Django Channels 2.x. After installing and configuring Channels, when I execute the runserver command I get the following error:
Traceback (most recent call last):
  File "C:\Subbu\Episkope\Lessons\Django\WsChat\env\lib\site-packages\daphne\http_protocol.py", line 179, in process
    "server": self.server_addr,
TypeError: __call__() got an unexpected keyword argument 'receive'
My settings file:
"""
Django settings for WsChatProj project.
Generated by 'django-admin startproject' using Django 2.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'kl$#^tt6qnt8ww^rrhmj&&(l0&as4w-#fkb&a0e#^#&vz3*t)a'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'channels',
    'WsChatApp',
]
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'WsChatProj.urls'
STATIC_URL = '/static/'
# Extra places for collectstatic to find static files.
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, 'static'),
]
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
WSGI_APPLICATION = 'WsChatProj.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_URL = '/static/'
ASGI_APPLICATION = 'WsChatApp.routing.application'
My asgi.py
import os
import django
from channels.routing import get_default_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "WsChatApp.settings")
django.setup()
application = get_default_application()
My routing.py
from django.conf.urls import url
from django.urls import path
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.auth import AuthMiddlewareStack
from channels.security.websocket import AllowedHostsOriginValidator, OriginValidator
# from WsChatApp.consumers import ChatConsumer
application = ProtocolTypeRouter({
})
I have commented out consumers.py completely, just to solve this error first, so there are no consumers at the moment.
Library Versions
Channels 2.2.0
asgiref 2.2.0
redis 2.10.6
asgi-redis 1.4.3
The other question I have: is a channel layer not absolutely necessary? In the tutorial I am following, no channel layer info is set in settings.py. Does the latest Channels library manage this under the hood?

You need to make changes to your asgi.py.
Use the following as your asgi.py:
"""
ASGI config for proj project.
It exposes the ASGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/3.0/howto/deployment/asgi/
"""
import os

from django.core.asgi import get_asgi_application
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.auth import AuthMiddlewareStack

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

# Initialise Django's ASGI application early so apps are loaded
# before any consumer code is imported.
django_asgi_app = get_asgi_application()

ws_pattern = []

application = ProtocolTypeRouter({
    "http": django_asgi_app,
    "websocket": AuthMiddlewareStack(URLRouter(
        ws_pattern
    )),
})
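Regarding the channel layer question: a channel layer is only required once consumers need to talk to each other (groups, broadcasts, background workers); a consumer that only handles its own socket runs without one, which is why the tutorial can leave CHANNEL_LAYERS out of settings.py. If you do need one later, a minimal sketch using channels_redis (assuming a Redis instance on localhost:6379) would look like this:
# settings.py -- only needed once consumers use groups / cross-process messaging
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("127.0.0.1", 6379)],
        },
    },
}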

Related

Antd_dayjs_vite_plugin : TypeError: (0 , import_antd_dayjs_vite_plugin.default) is not a function

At the beginning I had a problem with the French date in the antd calendar. I use Vite, so I installed antd-dayjs-vite-plugin to switch from Moment.js to Day.js. It worked well, but this morning the Vite build process fails. I tried to update the antd-dayjs-vite-plugin version (it was 1.1.4) and now I get the same problem when I try to launch yarn dev, as you can see:
$ yarn dev
yarn run v1.22.15
$ vite
failed to load config from vite.config.ts
error when starting dev server:
TypeError: (0 , import_antd_dayjs_vite_plugin.default) is not a function [...]
Here is the code in vite.config.ts:
import reactRefresh from '@vitejs/plugin-react-refresh';
import antdDayjs from 'antd-dayjs-vite-plugin';
import { defineConfig } from 'vite';

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [reactRefresh(), antdDayjs()],
  server: {
    host: process.env.HOST || '127.0.0.1',
  },
  resolve: {
    alias: [{ find: '@', replacement: '/src' }],
  },
  define: {
    __APP_VERSION__: JSON.stringify(process.env.npm_package_version),
  },
  build: {
    commonjsOptions: {
      transformMixedEsModules: true,
    },
  },
});
The problem appears with both antd-dayjs-vite-plugin 1.1.4 and 1.2.2. I also already tried updating Vite to 3.1 (it was 2.5).
I don't understand it: the code seems to be exactly the same as the usage shown in the package's README.
Thanks in advance for your help. 🙏🏻
It seems that a default export is expected by Vite (I tried replacing the import statement with import { antdDayjs } from 'antd-dayjs-vite-plugin'; without success).
I was able to create a workaround using patch-package with the steps below:
modify node_modules/antd-dayjs-vite-plugin/dist-node/index.js
at the very end of that file, add exports.default = antdDayjs;
create a patch for antd-dayjs-vite-plugin
ensure you have the postinstall script (refer to the patch-package docs)
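Concretely, the patch-package part of those steps looks roughly like this (a sketch; the generated patch file name will differ). Create the patch after editing the file in node_modules:
npx patch-package antd-dayjs-vite-plugin
and make sure package.json re-applies it on every install:
"scripts": {
  "postinstall": "patch-package"
}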

Q: How to ensure vendor chunk hash doesn't change with webpacker?

I have a Rails 6 project with webpacker 4.2.2 configured to split vendor chunks into individual files:
# config/webpack/environment.js
const { environment } = require('@rails/webpacker')
const webpack = require('webpack')

environment.config.merge({
  plugins: [
    new webpack.HashedModuleIdsPlugin(),
  ],
  optimization: {
    minimize: true,
    runtimeChunk: 'single',
    splitChunks: {
      chunks: 'all',
      maxInitialRequests: Infinity,
      minSize: 0,
      cacheGroups: {
        // @see https://hackernoon.com/the-100-correct-way-to-split-your-chunks-with-webpack-f8a9df5b7758
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name(module) {
            const packageName = module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/)[1];
            return `npm.${packageName.replace('@', '')}`;
          },
          priority: 10,
        },
      }
    }
  }
})

module.exports = environment
When we precompile our assets, this produces fingerprinted files for each NPM dependency, which we upload for long-term caching and CDN-based distribution.
However, we're noticing that when we add a new library to the pack, this unexpectedly causes a rehash of many chunk files for dependencies that have not changed at all.
For example, this change in my app/javascript/packs/application.js:
require("#rails/ujs").start()
require("turbolinks").start()
require("#rails/activestorage").start()
require("channels")
import 'msr'
import copy from 'clipboard-copy'
+import axios from 'axios'
will produce the following change in my output chunks (produced from running bin/rails webpacker:compile):
--- a 2020-07-06 18:39:52.202440803 +0000
+++ b 2020-07-06 18:39:52.210440748 +0000
@@ -1,6 +1,8 @@
-application-1e8721172ae65f57286b.chunk.js
-npm.clipboard-copy-10b42ffbc97b4e927071.chunk.js
-npm.msr-01ea266e2c932167f10b.chunk.js
-npm.rails-a4564cfc542024efeb95.chunk.js
-npm.turbolinks-eeef46ff44962af9ac87.chunk.js
-npm.webpack-7226f5cf46a8c4e61c26.chunk.js
+application-bad0ed20808541f88894.chunk.js
+npm.axios-40b4b54ebace2b9e3907.chunk.js
+npm.clipboard-copy-79d2051f48603e0267e0.chunk.js
+npm.msr-f5a4252b7a7e0a94157f.chunk.js
+npm.process-cfe824ecbab5abe0eecc.chunk.js
+npm.rails-aa1c430d6ceee3ca6bd6.chunk.js
+npm.turbolinks-e28554dbfd4b75aa12e5.chunk.js
+npm.webpack-35f718d9a20b8bca2927.chunk.js
This is a double whammy because of unnecessary cache invalidation and additional CDN transfer costs.
My question is, is there a way to ensure the vendor chunk doesn't get rehashed because of dependency changes?
I don't know if this is a limitation with the way that webpack's SplitChunksPlugin works, but any advice is appreciated.
By the way, I've prepared a minimal Rails project that reproduces the situation I've described above: https://github.com/alextsui05/webpacker-vendor-chunks
A detailed summary is included in the README on the repository, and I invite any answerers to discuss based on that code.
Try setting the option moduleIds: 'hashed'
https://v4.webpack.js.org/configuration/optimization/#optimizationmoduleids
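With webpacker this option goes in the same optimization block that is merged in config/webpack/environment.js; a minimal sketch (in webpack 4 it is the declarative equivalent of the HashedModuleIdsPlugin already present in the question's config):
environment.config.merge({
  optimization: {
    moduleIds: 'hashed', // keep module ids stable when new modules are added
    // ...runtimeChunk / splitChunks settings as in the question
  }
})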

AWS IoT: Use MQTT on port 443

I am trying to set up AWS IoT on a Pi on port 443 using Paho MQTT.
The AWS documentation (https://docs.aws.amazon.com/iot/latest/developerguide/protocols.html) mentions that
Clients wishing to connect using MQTT with X.509 Client Certificate authentication on port 443 must implement the Application Layer Protocol Negotiation (ALPN) TLS extension and pass x-amzn-mqtt-ca as the ProtocolName in the ProtocolNameList.
I don't know how to achieve this properly in Paho MQTT (https://github.com/eclipse/paho.mqtt.python).
What I tried to do (mqtt_apln.py)
import sys
import ssl
import time
import datetime
import logging, traceback
import paho.mqtt.client as mqtt
MQTT_TOPIC = "topictest"
MQTT_MSG = "hello MQTT"
IoT_protocol_name = "x-amzn-mqtt-ca"
aws_iot_endpoint = "xxxxxxx.iot.eu-west-1.amazonaws.com"
url = "https://{}".format(aws_iot_endpoint)
ca = ".xxxxx/rootCA.pem"
cert = ".xxxxx/xxxxx-certificate.pem.crt"
private = ".xxxxx/xxxxxx-private.pem.key"
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler(sys.stdout)
log_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(log_format)
logger.addHandler(handler)
# Define on connect event function
# We shall subscribe to our Topic in this function
def on_connect(mosq, obj, rc):
    mqttc.subscribe(MQTT_TOPIC, 0)
# Define on_message event function.
# This function will be invoked every time,
# a new message arrives for the subscribed topic
def on_message(mosq, obj, msg):
    print "Topic: " + str(msg.topic)
    print "QoS: " + str(msg.qos)
    print "Payload: " + str(msg.payload)
def on_subscribe(mosq, obj, mid, granted_qos):
    print("Subscribed to Topic: " +
          MQTT_MSG + " with QoS: " + str(granted_qos))
def ssl_alpn():
    try:
        # debug: print openssl version
        logger.info("open ssl version:{}".format(ssl.OPENSSL_VERSION))
        ssl_context = ssl.create_default_context()
        ssl_context.set_alpn_protocols([IoT_protocol_name])
        ssl_context.load_verify_locations(cafile=ca)
        ssl_context.load_cert_chain(certfile=cert, keyfile=private)
        return ssl_context
    except Exception as e:
        print("exception ssl_alpn()")
        raise e
mqttc = mqtt.Client()
# Assign event callbacks
mqttc.on_message = on_message
mqttc.on_connect = on_connect
mqttc.on_subscribe = on_subscribe
ssl_context= ssl_alpn()
mqttc.tls_set_context(context=ssl_context)
logger.info("start connect")
mqttc.connect(aws_iot_endpoint, port=443)
logger.info("connect success")
mqttc.loop_start()
On the Pi, I installed Python 2.7.14 and paho-mqtt.
But when I run python mqtt_apln.py, it shows the error: ImportError: No module named paho.mqtt.client
Any suggestion is appreciated.
I think there are two things going on here. First, a pip install paho-mqtt should make the package active for the currently referenced Python. For instance, under a 3.6.2 virtualenv, pip list should return:
$ pip list
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your
pip.conf under the [list] section) to disable this warning.
paho-mqtt (1.3.1)
pip (9.0.1)
setuptools (28.8.0)
How did you install the paho-mqtt package, via an apt package or directly with pip? Personally I virtualenv everything or include the package within the application directory via pip install package_name -t . to reference the current working directory.
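To see which installation (if any) the interpreter you actually run picks up, a quick check is something like:
# run this with the same interpreter you use for mqtt_apln.py
import paho.mqtt.client as mqtt
print(mqtt.__file__)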
From there it's working with the ALPN configuration. I reduced your code to just publish to the test topic on my end-point and used the AWS IoT console-->Test to subscribe to the test topic. For both python 2.7.12 and 3.6.2 I successfully received messages.
The main changes were to remove the callbacks, place an mqttc.publish followed by a time.sleep(3) to give the thread time to publish, and then close the connection.
Here is the code pared down to just the publish:
import sys
import ssl
import time
import datetime
import logging, traceback
import paho.mqtt.client as mqtt
MQTT_TOPIC = "topictest"
MQTT_MSG = "hello MQTT"
IoT_protocol_name = "x-amzn-mqtt-ca"
aws_iot_endpoint = "xxxxxx.iot.us-east-1.amazonaws.com"
ca = "ca.pem"
cert = "xxxx-certificate.pem.crt"
private = "xxxx-private.pem.key"
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler(sys.stdout)
log_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(log_format)
logger.addHandler(handler)
def ssl_alpn():
    try:
        # debug: print openssl version
        logger.info("open ssl version:{}".format(ssl.OPENSSL_VERSION))
        ssl_context = ssl.create_default_context()
        ssl_context.set_alpn_protocols([IoT_protocol_name])
        ssl_context.load_verify_locations(cafile=ca)
        ssl_context.load_cert_chain(certfile=cert, keyfile=private)
        return ssl_context
    except Exception as e:
        print("exception ssl_alpn()")
        raise e
mqttc = mqtt.Client()
ssl_context= ssl_alpn()
mqttc.tls_set_context(context=ssl_context)
logger.info("start connect")
mqttc.connect(aws_iot_endpoint, port=443)
logger.info("connect success")
mqttc.loop_start()
# After loop start publish and wait for message to be sent.
# Hard coded delay but would normally tie into event loop
# or on_publish() CB
# JSON payload because, pretty
mqttc.publish('test', '{"foo": "bar"}')
time.sleep(3)
mqttc.loop_stop()
Please let me know if that works for you. It's awesome that AWS now supports MQTT connections on port 443 without having to use websockets (and the need for SigV4 credentials).
I ran into the same issue regarding the use of paho-mqtt with AWS IoT Core.
https://aws.amazon.com/de/blogs/iot/how-to-implement-mqtt-with-tls-client-authentication-on-port-443-from-client-devices-python/
The tutorial uses no client ID. Depending on your security rules, you may have to supply a client ID to be able to connect properly. Here is an SDK example policy where only the clients "sdk-java", "basicPubSub" and "sdk-nodejs-*" are allowed to connect.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:Publish",
        "iot:Receive"
      ],
      "Resource": [
        "arn:aws:iot:eu-central-1:036954049003:topic/sdk/test/java",
        "arn:aws:iot:eu-central-1:036954049003:topic/sdk/test/Python",
        "arn:aws:iot:eu-central-1:036954049003:topic/topic_1",
        "arn:aws:iot:eu-central-1:036954049003:topic/topic_2"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iot:Subscribe"
      ],
      "Resource": [
        "arn:aws:iot:eu-central-1:036954049003:topicfilter/sdk/test/java",
        "arn:aws:iot:eu-central-1:036954049003:topicfilter/sdk/test/Python",
        "arn:aws:iot:eu-central-1:036954049003:topicfilter/topic_1",
        "arn:aws:iot:eu-central-1:036954049003:topicfilter/topic_2"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iot:Connect"
      ],
      "Resource": [
        "arn:aws:iot:eu-central-1:036954049003:client/sdk-java",
        "arn:aws:iot:eu-central-1:036954049003:client/basicPubSub",
        "arn:aws:iot:eu-central-1:036954049003:client/sdk-nodejs-*"
      ]
    }
  ]
}
To allow the connection when your permissions are based on the client ID, change this line:
mqttc = mqtt.Client(client_id=MYCLIENTID)
MYCLIENTID is one of "sdk-java", "basicPubSub" or "sdk-nodejs-*" in this example.

Google Cloud Dataflow cryptic message when downloading file from gcp to local system

I am writing a Dataflow pipeline that processes videos from a Google Cloud bucket. My pipeline downloads each work item to the local system and then re-uploads the results to a GCP bucket, following a previous question.
The pipeline works with the local DirectRunner; I'm having trouble debugging on the DataflowRunner.
The error reads:
File "run_clouddataflow.py", line 41, in process
File "/usr/local/lib/python2.7/dist-packages/google/cloud/storage/blob.py", line 464, in download_to_file self._do_download(transport, file_obj, download_url, headers)
File "/usr/local/lib/python2.7/dist-packages/google/cloud/storage/blob.py", line 418, in _do_download download.consume(transport) File "/usr/local/lib/python2.7/dist-packages/google/resumable_media/requests/download.py", line 101, in consume self._write_to_stream(result)
File "/usr/local/lib/python2.7/dist-packages/google/resumable_media/requests/download.py", line 62, in _write_to_stream with response: AttributeError: __exit__ [while running 'Run DeepMeerkat']
When trying to execute blob.download_to_file(file_obj) within:
storage_client=storage.Client()
bucket = storage_client.get_bucket(parsed.hostname)
blob=storage.Blob(parsed.path[1:],bucket)
#store local path
local_path="/tmp/" + parsed.path.split("/")[-1]
print('local path: ' + local_path)
with open(local_path, 'wb') as file_obj:
    blob.download_to_file(file_obj)
print("Downloaded" + local_path)
I'm guessing that the workers don't have permission to write locally? Or perhaps there is no /tmp folder in the Dataflow container. Where should I write objects? It's hard to debug without access to the environment. Is it possible to access stdout from workers for debugging purposes (serial console)?
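If the concern is only where a worker may write, one option is to let the standard library pick a scratch directory instead of hard-coding /tmp; a sketch of that (reusing parsed and blob from the snippet above):
import os
import tempfile

# let the OS pick a writable scratch directory on the worker
scratch_dir = tempfile.mkdtemp()
local_path = os.path.join(scratch_dir, parsed.path.split("/")[-1])
with open(local_path, 'wb') as file_obj:
    blob.download_to_file(file_obj)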
EDIT #1
I've tried explicitly passing credentials:
try:
    credentials, project = google.auth.default()
except:
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = known_args.authtoken
    credentials, project = google.auth.default()
as well as writing to the current working directory instead of /tmp/:
local_path=parsed.path.split("/")[-1]
print('local path: ' + local_path)
with open(local_path, 'wb') as file_obj:
    blob.download_to_file(file_obj)
I'm still getting the cryptic error on blob downloads from GCP.
The full pipeline script is below; setup.py is here.
import logging
import argparse
import json
import logging
import os
import csv
import apache_beam as beam
from urlparse import urlparse
from google.cloud import storage

## The namespaces inside of clouddataflow workers is not inherited,
## Please see https://cloud.google.com/dataflow/faq#how-do-i-handle-nameerrors, better to write ugly import statements then to miss a namespace

class PredictDoFn(beam.DoFn):
    def process(self, element):
        import csv
        from google.cloud import storage
        from DeepMeerkat import DeepMeerkat
        from urlparse import urlparse
        import os
        import google.auth

        DM = DeepMeerkat.DeepMeerkat()
        print(os.getcwd())
        print(element)

        # try adding credentials?
        # set credentials, inherent from worker
        credentials, project = google.auth.default()

        # download element locally
        parsed = urlparse(element[0])

        # parse gcp path
        storage_client = storage.Client(credentials=credentials)
        bucket = storage_client.get_bucket(parsed.hostname)
        blob = storage.Blob(parsed.path[1:], bucket)

        # store local path
        local_path = parsed.path.split("/")[-1]
        print('local path: ' + local_path)
        with open(local_path, 'wb') as file_obj:
            blob.download_to_file(file_obj)
        print("Downloaded" + local_path)

        # Assign input from DataFlow/manifest
        DM.process_args(video=local_path)
        DM.args.output = "Frames"

        # Run DeepMeerkat
        DM.run()

        # upload back to GCS
        found_frames = []
        for (root, dirs, files) in os.walk("Frames/"):
            for files in files:
                fileupper = files.upper()
                if fileupper.endswith((".JPG")):
                    found_frames.append(os.path.join(root, files))
        for frame in found_frames:
            # create GCS path
            path = "DeepMeerkat/" + parsed.path.split("/")[-1] + "/" + frame.split("/")[-1]
            blob = storage.Blob(path, bucket)
            blob.upload_from_filename(frame)

def run():
    import argparse
    import os
    import apache_beam as beam
    import csv
    import logging
    import google.auth

    parser = argparse.ArgumentParser()
    parser.add_argument('--input', dest='input', default="gs://api-project-773889352370-testing/DataFlow/manifest.csv",
                        help='Input file to process.')
    parser.add_argument('--authtoken', default="/Users/Ben/Dropbox/Google/MeerkatReader-9fbf10d1e30c.json",
                        help='Input file to process.')
    known_args, pipeline_args = parser.parse_known_args()

    # set credentials, inherent from worker
    try:
        credentials, project = google.auth.default()
    except:
        os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = known_args.authtoken
        credentials, project = google.auth.default()

    p = beam.Pipeline(argv=pipeline_args)
    vids = (p | 'Read input' >> beam.io.ReadFromText(known_args.input)
              | 'Parse input' >> beam.Map(lambda line: csv.reader([line]).next())
              | 'Run DeepMeerkat' >> beam.ParDo(PredictDoFn()))
    logging.getLogger().setLevel(logging.INFO)
    p.run()

if __name__ == '__main__':
    logging.getLogger().setLevel(logging.INFO)
    run()
I spoke to the google-cloud-storage package maintainer; this was a known issue. Updating specific versions in my setup.py to
REQUIRED_PACKAGES = ["google-cloud-storage==1.3.2","google-auth","requests>=2.18.0"]
fixed the issue.
https://github.com/GoogleCloudPlatform/google-cloud-python/issues/3836
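For context, the pin lives in the setup.py that gets shipped to the Dataflow workers; a minimal sketch of such a setup.py (the package name and version here are placeholders, only the REQUIRED_PACKAGES line comes from above):
import setuptools

REQUIRED_PACKAGES = ["google-cloud-storage==1.3.2", "google-auth", "requests>=2.18.0"]

setuptools.setup(
    name="deepmeerkat-pipeline",  # placeholder name for illustration
    version="0.0.1",
    install_requires=REQUIRED_PACKAGES,
    packages=setuptools.find_packages(),
)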

How to import nixos config and merge it with nixops deployment expression

I'm in the process of learning how to use Nix/NixOS/NixOps, and I'm having trouble refactoring a simple NixOps deployment.
My starting point is this working vbox-all.nix file:
{
  server =
    { config, pkgs, ... }:
    {
      # deployment-specific config
      deployment.targetEnv = "virtualbox";
      deployment.virtualbox.memorySize = 1024; # megabytes
      deployment.virtualbox.vcpu = 2; # number of cpus
      # postgres-specific config
      services.postgresql.enable = true;
      services.postgresql.package = pkgs.postgresql96;
      # htop-specific config
      environment.systemPackages =
        [
          pkgs.htop
        ];
    };
}
Running nixops create ./vbox.nix -d mydeployment and then nixops deploy -d mydeployment works perfectly : I get a VirtualBox machine with Postgres 9.6 running and htop installed.
Now, having all of this in one file does not seem to be a good idea for long term maintenance.
Here is the file layout I think I want:
.
├── configuration-all.nix # forms a NixOs config with htop, postgres, etc.
├── htop.nix # NixOs config of just htop
├── postgres.nix # NixOs config of just Postgres
└── vbox-all.nix # NixOps config for virtualbox with htop, postgres, etc.
The idea being that vbox-all.nix imports configuration-all.nix which imports all services/packages/conf I might want (currently postgres and htop).
That's what I cannot get to work.
Here is my configuration-all.nix:
{ config, pkgs, ... }:
{
  imports = [ ./postgres.nix ./htop.nix ];
}
Here is ./postgres.nix:
{ config, pkgs, ... }:
{
  services.postgresql.enable = true;
  services.postgresql.package = pkgs.postgresql96;
}
I think you can guess the content of ./htop.nix, and it doesn't really matter anyway.
And finally, my modified vbox-all.nix:
{
  server =
    { config, pkgs, ... }:
    with (pkgs.callPackage ./configuration-all.nix { });
    {
      # deployment-specific config
      deployment.targetEnv = "virtualbox";
      deployment.virtualbox.memorySize = 1024; # megabytes
      deployment.virtualbox.vcpu = 2; # number of cpus
    };
}
When I re-run nixops deploy -d mydeployment, I don't get any errors, but the resulting VM has neither postgres nor htop.
I must be fundamentally misunderstanding either with or callPackage. To my mind it should: execute the function defined in ./configuration-all.nix (auto-filling all its arguments) and merge the resulting expression with my "deployment-specific config".
I tried a few things, like replacing pkgs.callPackage with import (still no error, but still no good) and using inherit (pkgs.callPackage ./configuration-all.nix { }) instead of with, but so far no dice.
I must be missing something small and probably obvious...
Here is my final working vbox-all.nix I figured out while writing my question.
{
  server =
    {
      imports = [ ./configuration-all.nix ];
      # deployment-specific config
      deployment.targetEnv = "virtualbox";
      deployment.virtualbox.memorySize = 1024; # megabytes
      deployment.virtualbox.vcpu = 2; # number of cpus
    };
}
Thanks SO, you're a good rubber duck.
I still need to understand why my other attempts with with and inherit did not work, so don't hesitate to comment or post an alternative answer. I have a lot to learn.
