I want to use a NiftyNet pretrained segmentation model for segmenting custom data. I downloaded the pretrained weights and modified the model_dir path to point to the downloaded directory.
However, when I run
python3 net_segment.py train -c /home/Container_data/config/promise12_demo_train_config.ini
I am getting the error below.
Caused by op 'save/Assign_17', defined at:
File "net_segment.py", line 8, in <module>
sys.exit(main())
File "/home/NiftyNet/niftynet/__init__.py", line 142, in main
app_driver.run(app_driver.app)
File "/home/NiftyNet/niftynet/engine/application_driver.py", line 197, in run
SESS_STARTED.send(application, iter_msg=None)
File "/usr/local/lib/python3.5/dist-packages/blinker/base.py", line 267, in send
for receiver in self.receivers_for(sender)]
File "/usr/local/lib/python3.5/dist-packages/blinker/base.py", line 267, in <listcomp>
for receiver in self.receivers_for(sender)]
File "/home/NiftyNet/niftynet/engine/handler_model.py", line 109, in restore_model
var_list=to_restore, save_relative_paths=True)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1102, in __init__
self.build()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1114, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1151, in _build
build_save=build_save, build_restore=build_restore)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 795, in _build_internal
restore_sequentially, reshape)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 428, in _AddRestoreOps
assign_ops.append(saveable.restore(saveable_tensors, shapes))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 119, in restore
self.op.get_shape().is_fully_defined())
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/state_ops.py", line 221, in assign
validate_shape=validate_shape)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_state_ops.py", line 61, in assign
use_locking=use_locking, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Assign requires shapes of both tensors to match. lhs shape= [3,3,61,256] rhs shape= [3,3,3,61,9]
[[node save/Assign_17 (defined at /home/NiftyNet/niftynet/engine/handler_model.py:109) = Assign[T=DT_FLOAT, _class=["loc:@DenseVNet/conv/conv_/w"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](DenseVNet/conv/conv_/w, save/RestoreV2/_35)]]
https://github.com/tensorflow/models/issues/5390
The above link says to add
--initialize_last_layer = False
--last_layers_contain_logits_only = False
Can someone help me figure out how to get rid of this error?
It seems you are having problems with your last layer. When you use a pretrained model on a new task, you probably need to change the last layer to fit your new requirements.
In order to do that, you should modify your config file to restore all variables except the last layer:
vars_to_restore = ^((?!(last_layer_name)).)*$
and then set num_classes to suit your new segmentation problem.
You can check transfer learning docs here: https://niftynet.readthedocs.io/en/dev/transfer_learning.html
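As an illustration of what that regular expression keeps and drops, here is a quick sketch; last_layer_name is a placeholder for whatever your network's final-layer variable scope is actually called:

import re

# Negative lookahead: match only variable names that do NOT contain "last_layer_name".
vars_to_restore = r"^((?!(last_layer_name)).)*$"

# The second name below is a made-up example of a variable that would be excluded.
for name in ["DenseVNet/conv/conv_/w", "DenseVNet/last_layer_name/conv_/w"]:
    print(name, "-> restored" if re.match(vars_to_restore, name) else "-> re-initialised")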
I'm running into an error with my Snakemake variant identification pipeline when the original DAG of jobs is built. I believe this is a memory issue: when I test with a short list of input files, the DAG is constructed without issue; however, when I try with 300+ paired-fastq inputs, I receive the following error:
Building DAG of jobs...
Traceback (most recent call last):
File "/home//.conda/envs/snakemake/lib/python3.6/site-packages/snakemake/__init__.py", line 633, in snakemake
keepincomplete=keep_incomplete,
File "/home//.conda/envs/snakemake/lib/python3.6/site-packages/snakemake/workflow.py", line 568, in execute
dag.check_incomplete()
File "/home//.conda/envs/snakemake/lib/python3.6/site-packages/snakemake/dag.py", line 281, in check_incomplete
incomplete = self.incomplete_files
File "/home//.conda/envs/snakemake/lib/python3.6/site-packages/snakemake/dag.py", line 402, in incomplete_files
filterfalse(self.needrun, self.jobs),
File "/home/k/.conda/envs/snakemake/lib/python3.6/site-packages/snakemake/dag.py", line 399, in <genexpr>
job.output
File "/home//.conda/envs/snakemake/lib/python3.6/site-packages/snakemake/persistence.py", line 205, in incomplete
return any(map(lambda f: f.exists and marked_incomplete(f), job.output))
File "/home//.conda/envs/snakemake/lib/python3.6/site-packages/snakemake/persistence.py", line 205, in <lambda>
return any(map(lambda f: f.exists and marked_incomplete(f), job.output))
File "/home//.conda/envs/snakemake/lib/python3.6/site-packages/snakemake/persistence.py", line 203, in marked_incomplete
return self._read_record(self._metadata_path, f).get("incomplete", False)
File "/home//.conda/envs/snakemake/lib/python3.6/site-packages/snakemake/persistence.py", line 322, in _read_record_cached
return self._read_record_uncached(subject, id)
File "/home//.conda/envs/snakemake/lib/python3.6/site-packages/snakemake/persistence.py", line 328, in _read_record_uncached
return json.load(f)
File "/home//.conda/envs/snakemake/lib/python3.6/json/__init__.py", line 299, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/home//.conda/envs/snakemake/lib/python3.6/json/__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "/home//.conda/envs/snakemake/lib/python3.6/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/home//.conda/envs/snakemake/lib/python3.6/json/decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
I'm not sure how to resolve this: is this a known bug, or is there a way to define my pipeline so it builds a less complex DAG? I am including the first section of my Snakemake file as well. I use the rule all to define all desired output files.
################################
#### Mtb bwa/GATK Snakemake ####
################################
import numpy as np
from collections import defaultdict
import pandas as pd
samples_df = pd.read_table('config/tgen_samples2a.tsv',sep = ',').set_index("sample", drop=False)
sample_names = list(samples_df['sample'])
batch_names = list(samples_df['batch'])
#print(sample_names)
# fastq1 input function definition
def fq1_from_sample(wildcards):
    return samples_df.loc[wildcards.sample, "fastq_1"]
# fastq2 input function definition
def fq2_from_sample(wildcards):
    return samples_df.loc[wildcards.sample, "fastq_2"]
# Define config file. Stores sample names and other things.
configfile: "config/config.yaml"
# Define a rule for running the complete pipeline.
rule all:
    wildcard_constraints:
        batch="IS-.+"
    input:
        trim = expand(['results/{batch}/{samp}/trim/{samp}_trim_1.fq.gz'], zip, samp=sample_names,batch=batch_names),
        kraken=expand('results/{batch}/{samp}/kraken/{samp}_trim_kr_1.fq.gz', zip, samp=sample_names,batch=batch_names),
        bams=expand('results/{batch}/{samp}/bams/{samp}_{mapper}_{ref}_sorted.bam', zip, samp=sample_names,batch=batch_names, ref = config['ref']*len(sample_names), mapper = config['mapper']*len(sample_names)), # When using zip, need to use vectors of equal lengths for all wildcards.
        per_samp_run_stats = expand('results/{batch}/{samp}/stats/{samp}_{mapper}_{ref}_combined_stats.csv', zip, samp=sample_names,batch=batch_names, ref = config['ref']*len(sample_names), mapper = config['mapper']*len(sample_names)),
        amr_stats=expand('results/{batch}/{samp}/stats/{samp}_{mapper}_{ref}_amr.csv', samp=sample_names,batch=batch_names, ref=config['ref'], mapper=config['mapper']),
        cov_stats=expand('results/{batch}/{samp}/stats/{samp}_{mapper}_{ref}_cov_stats.txt', samp=sample_names,batch=batch_names, ref=config['ref'], mapper=config['mapper']),
        all_sample_stats=expand('results/{batch}/stats/combined_per_run_sample_stats.csv',batch = batch_names),
        vcfs=expand('results/{batch}/{samp}/vars/{samp}_{mapper}_{ref}_{caller}_qfilt.vcf.gz', samp=sample_names,batch=batch_names, ref=config['ref'], mapper=config['mapper'], caller = config['caller']),
        ann_vcfs=expand('results/{batch}/{samp}/vars/{samp}_{mapper}_{ref}_gatk_ann.vcf.gz', samp=sample_names,batch=batch_names, ref=config['ref'], mapper=config['mapper'], caller = config['caller']),
        fastas=expand('results/{batch}/{samp}/fasta/{samp}_{mapper}_{ref}_{caller}_{filter}.fa', samp=sample_names,batch=batch_names, ref=config['ref'], mapper=config['mapper'], caller = config['caller'], filter=config['filter']),
        profiles=expand('results/{batch}/{samp}/stats/{samp}_{mapper}_{ref}_lineage.csv', samp=sample_names,batch=batch_names, ref=config['ref'], mapper=config['mapper'])
# Trim reads for quality.
rule trim_reads:
    input:
        p1=fq1_from_sample,
        p2=fq2_from_sample
    output:
        trim1='results/{batch}/{sample}/trim/{sample}_trim_1.fq.gz',
        trim2='results/{batch}/{sample}/trim/{sample}_trim_2.fq.gz'
    log:
        'results/{batch}/{sample}/trim/{sample}_trim_reads.log'
    shell:
        '{config[scripts_dir]}trim_reads.sh {input.p1} {input.p2} {output.trim1} {output.trim2} &>> {log}'
# Filter reads taxonomically with Kraken.
rule taxonomic_filter:
    input:
        trim1='results/{batch}/{samp}/trim/{samp}_trim_1.fq.gz',
        trim2='results/{batch}/{samp}/trim/{samp}_trim_2.fq.gz'
    output:
        kr1='results/{batch}/{samp}/kraken/{samp}_trim_kr_1.fq.gz',
        kr2='results/{batch}/{samp}/kraken/{samp}_trim_kr_2.fq.gz',
        kraken_report='results/{batch}/{samp}/kraken/{samp}_kraken.report',
        kraken_stats = 'results/{batch}/{samp}/kraken/{samp}_kraken_stats.csv'
    log:
        'results/{batch}/{samp}/kraken/{samp}_kraken.log'
    threads: 8
    shell:
        '{config[scripts_dir]}run_kraken.sh {input.trim1} {input.trim2} {output.kr1} {output.kr2} {output.kraken_report} &>> {log}'
Thank you in advance for help using Snakemake!
All the best,
I kind of doubt memory is the issue. 300+ samples is not much, especially if each of them is processed independently of the others.
Try starting from the subset of samples that you say worked and gradually increase it until you see the problem appear. Perhaps you have some funny value in your sample sheet or in your config? The json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) hints at something like that, in my impression.
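If you want to pinpoint the offending record before wiping anything, here is a minimal diagnostic sketch. It assumes the persistence records that the traceback's _read_record is reading live under .snakemake/metadata, which may differ by Snakemake version:

import json
from pathlib import Path

# Walk Snakemake's metadata store (assumed location) and flag any record that is
# not valid JSON; an empty or truncated file reproduces the JSONDecodeError above.
for record in Path(".snakemake/metadata").rglob("*"):
    if record.is_file():
        try:
            json.loads(record.read_text())
        except json.JSONDecodeError as err:
            print(f"corrupt metadata record: {record} ({err})")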
The answer was from @TroyComi, above: after deleting the .snakemake directory, the issue was resolved. Thank you!
I am trying to load pretrained word2vec embeddings stored in a gensim KeyedVectors file, 'word2vec.kv':
pretrained = KeyedVectors.load(args.pretrained, mmap = 'r')
where args.pretrained is "/ptembs/word2vec.kv",
and I am getting this error:
File "main.py", line 60, in main
pretrained = KeyedVectors.load(args.pretrained, mmap = 'r')
File "C:\Users\ASUS\anaconda3\lib\site-packages\gensim\models\keyedvectors.py", line 1553, in load
model = super(WordEmbeddingsKeyedVectors, cls).load(fname_or_handle, **kwargs)
File "C:\Users\ASUS\anaconda3\lib\site-packages\gensim\models\keyedvectors.py", line 228, in load
return super(BaseKeyedVectors, cls).load(fname_or_handle, **kwargs)
File "C:\Users\ASUS\anaconda3\lib\site-packages\gensim\utils.py", line 436, in load obj._load_specials(fname, mmap, compress, subname)
File "C:\Users\ASUS\anaconda3\lib\site-packages\gensim\utils.py", line 478, in _load_specials
val = np.load(subname(fname, attrib), mmap_mode=mmap)
File "C:\Users\ASUS\anaconda3\lib\site-packages\numpy\lib\npyio.py", line 417, in load
fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: 'ptembs/word2vec.kv.vectors.npy'
I don't understand why it needs a word2vec.kv.vectors.npy file; I don't have it.
Any idea how to solve this problem?
The gensim version is 3.8.3; I also tried 4.1.2 and got the same error.
Where did you get the file 'word2vec.kv'?
If loading that file triggers an error mentioning a second file by name, then that second file should have been created alongside 'word2vec.kv' when it was first saved using a .save() operation.
That other file needs to be kept alongside 'word2vec.kv' in order for 'word2vec.kv' to be .load()ed again in the future.
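For illustration, a minimal sketch of that expectation; the path is the asker's relative path, and whether sidecar files exist depends on how the vectors were originally saved:

from gensim.models import KeyedVectors

# .save() writes large arrays to sidecar .npy files (e.g. word2vec.kv.vectors.npy),
# so every file sharing the word2vec.kv prefix has to travel with the .kv file.
kv = KeyedVectors.load("ptembs/word2vec.kv", mmap='r')   # raises FileNotFoundError if a sidecar is missing
print(kv.vectors.shape)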
I'm trying to import a course that has multiple videos, hosted on an Ironwood server, into a newly deployed Juniper server. While importing, it throws the following errors in the console.
The following is the console log output:
TypeError: Unicode-objects must be encoded before hashing
[2020-12-11 09:29:02,847: INFO/Worker-1] VAL: Video created with id [54454916-47e9-4769-8b41-06062d0b7e8c] and status [external]
[2020-12-11 09:29:02,860: ERROR/Worker-1] [VAL] Transcript save failed to storage for video_id "54454916-47e9-4769-8b41-06062d0b7e8c" language code "en"
Traceback (most recent call last):
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/edxval/models.py", line 489, in create
video_transcript.transcript.save(file_name, transcript_content)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/django/db/models/fields/files.py", line 87, in save
self.name = self.storage.save(name, content, max_length=self.field.max_length)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/django/core/files/storage.py", line 52, in save
return self._save(name, content)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/storages/backends/s3boto3.py", line 495, in _save
self._save_content(obj, content, parameters=parameters)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/storages/backends/s3boto3.py", line 510, in _save_content
obj.upload_fileobj(content, ExtraArgs=put_parameters)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/boto3/s3/inject.py", line 513, in object_upload_fileobj
ExtraArgs=ExtraArgs, Callback=Callback, Config=Config)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/boto3/s3/inject.py", line 431, in upload_fileobj
return future.result()
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/s3transfer/futures.py", line 73, in result
return self._coordinator.result()
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/s3transfer/futures.py", line 233, in result
raise self._exception
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/s3transfer/tasks.py", line 126, in __call__
return self._execute_main(kwargs)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/s3transfer/tasks.py", line 150, in _execute_main
return_value = self._main(**kwargs)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/s3transfer/upload.py", line 692, in _main
client.put_object(Bucket=bucket, Key=key, Body=body, **extra_args)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/botocore/client.py", line 317, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/botocore/client.py", line 596, in _make_api_call
request_signer=self._request_signer, context=request_context)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/botocore/hooks.py", line 242, in emit_until_response
responses = self._emit(event_name, kwargs, stop_on_response=True)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/botocore/hooks.py", line 210, in _emit
response = handler(**kwargs)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/botocore/handlers.py", line 209, in conditionally_calculate_md5
calculate_md5(params, **kwargs)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/botocore/handlers.py", line 187, in calculate_md5
binary_md5 = _calculate_md5_from_file(body)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/botocore/handlers.py", line 201, in _calculate_md5_from_file
md5.update(chunk)
The course is being imported from an Ironwood-deployed server to a Juniper-deployed server.
I went through all the logs and was able to track the error down to edxval failing while uploading the file to the S3 bucket. So I checked the edxval release versions, and version 1.4.3 is the one that fixed the S3 bucket upload issue. I updated it, and that fixed my issue.
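For context, the TypeError at the top of the log is Python 3's hashlib refusing a str where it needs bytes, so the S3 upload path has to be handed encoded content; that is presumably the class of bug the newer edxval release addresses. A minimal reproduction of the message:

import hashlib

md5 = hashlib.md5()
# md5.update("WEBVTT transcript text")                 # TypeError: Unicode-objects must be encoded before hashing
md5.update("WEBVTT transcript text".encode("utf-8"))   # hashing requires bytes, so the str must be encoded first
print(md5.hexdigest())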
I am trying to add a custom mesh (a torus) as a .dae file for collision and visual geometry in my .sdf model.
When I run my program, drake_visualizer gives the following error:
File "/opt/drake/lib/python2.7/site-packages/director/lcmUtils.py", line 119, in handleMessage
callback(msg)
File "/opt/drake/lib/python2.7/site-packages/director/drakevisualizer.py", line 352, in onViewerLoadRobot
self.addLinksFromLCM(msg)
File "/opt/drake/lib/python2.7/site-packages/director/drakevisualizer.py", line 376, in addLinksFromLCM
self.addLink(Link(link), link.robot_num, link.name)
File "/opt/drake/lib/python2.7/site-packages/director/drakevisualizer.py", line 299, in __init__
self.geometry.extend(Geometry.createGeometry(link.name + ' geometry data', g))
File "/opt/drake/lib/python2.7/site-packages/director/drakevisualizer.py", line 272, in createGeometry
polyDataList, visInfo = Geometry.createPolyDataFromFiles(geom)
File "/opt/drake/lib/python2.7/site-packages/director/drakevisualizer.py", line 231, in createPolyDataFromFiles
polyDataList = [ioUtils.readPolyData(filename)]
File "/opt/drake/lib/python2.7/site-packages/director/ioUtils.py", line 25, in readPolyData
raise Exception('Unknown file extension in readPolyData: %s' % filename)
Exception: Unknown file extension in readPolyData: /my_path/model.dae
Since prius.sdf also uses prius.dae, I assume this is possible. What am I doing wrong?
tl;dr drake_visualizer doesn't load dae files. If you put a similarly named .obj file in the same folder it will load that (and you can leave your sdf file still referencing the dae file).
Long answer:
drake_visualizer has a very specific, arbitrary protocol for loading files. Given an arbitrary file name (e.g., my_geometry.dae) it will:
1. Strip off the extension.
2. Try the following files (in order), loading the first one it finds:
   - my_geometry.vtm
   - my_geometry.vtp
   - my_geometry.obj
   - the file with the original extension
It can load: vtm, vtp, ply, obj, and stl files.
The worst thing is if you have both a vtp and an obj file in the same folder with the same name and you specify the obj, it'll still favor the vtp file.
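For illustration, here is a rough sketch of that lookup order (a hypothetical helper, not Drake's actual code):

import os

def resolve_mesh_path(requested_path):
    # Mimic the lookup order described above: try .vtm, then .vtp, then .obj,
    # and only then the originally requested file.
    base, _ = os.path.splitext(requested_path)
    for candidate in (base + ".vtm", base + ".vtp", base + ".obj", requested_path):
        if os.path.exists(candidate):
            return candidate
    raise FileNotFoundError(requested_path)

# e.g. resolve_mesh_path("/my_path/model.dae") returns "/my_path/model.obj"
# if model.obj sits next to model.dae, which is the workaround described above.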
I frequently encounter this error in Mako templates using Pylons 0.9.7:
AttributeError: 'NoneType' object has no attribute 'decode'
Usually I've entered a variable name that doesn't exist, tried to use a linebreak within a code line, or made some other minor error. Definitely my fault.
This results in an 'Internal Server Error' in the browser, the same thing in the debug view, and a stack trace that starts in HTTPServer and ends with the AttributeError in mako/exceptions.py.
Is there anything I can do to make this easier to debug, like find out the line that exception is being generated on within the Mako template? Thanks!
I am not absolutely sure that it's the same issue, but as far as I remember this used to happen a lot when you do AJAX loads of page fragments. Then you don't get anything more useful than this message.
However if you try to load the address of the AJAX request itself in your browser (replacing post parameters with get parameters if needed), you will get a "normal" debug page.
In my case, it turns out there was a division by 0 error in my template. That was yielding an internal server error, and a very unhelpful stack trace in console output.
I know, it sounds like I shouldn't have had that logic in a template anyway, but in this case, I think it makes sense to do so. Here's the stack trace I get with division by 0:
Exception happened during processing of request from ('127.0.0.1', 50681)
Traceback (most recent call last):
File "/Library/Python/2.6/site-packages/Paste-1.7.4-py2.6.egg/paste/httpserver.py", line 1068, in process_request_in_thread
self.finish_request(request, client_address)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/SocketServer.py", line 320, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/SocketServer.py", line 615, in __init__
self.handle()
File "/Library/Python/2.6/site-packages/Paste-1.7.4-py2.6.egg/paste/httpserver.py", line 442, in handle
BaseHTTPRequestHandler.handle(self)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/BaseHTTPServer.py", line 329, in handle
self.handle_one_request()
File "/Library/Python/2.6/site-packages/Paste-1.7.4-py2.6.egg/paste/httpserver.py", line 437, in handle_one_request
self.wsgi_execute()
File "/Library/Python/2.6/site-packages/Paste-1.7.4-py2.6.egg/paste/httpserver.py", line 287, in wsgi_execute
self.wsgi_start_response)
File "/Library/Python/2.6/site-packages/Paste-1.7.4-py2.6.egg/paste/cascade.py", line 130, in __call__
return self.apps[-1](environ, start_response)
File "/Library/Python/2.6/site-packages/Paste-1.7.4-py2.6.egg/paste/registry.py", line 375, in __call__
app_iter = self.application(environ, start_response)
File "/Library/Python/2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/middleware.py", line 201, in __call__
self.app, environ, catch_exc_info=True)
File "/Library/Python/2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/util.py", line 94, in call_wsgi_application
app_iter = application(environ, start_response)
File "/Library/Python/2.6/site-packages/WebError-0.10.2-py2.6.egg/weberror/evalexception.py", line 235, in __call__
return self.respond(environ, start_response)
File "/Library/Python/2.6/site-packages/WebError-0.10.2-py2.6.egg/weberror/evalexception.py", line 483, in respond
return debug_info.content()
File "/Library/Python/2.6/site-packages/WebError-0.10.2-py2.6.egg/weberror/evalexception.py", line 545, in content
result = tmpl_formatter(self.exc_value)
File "/Library/Python/2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/error.py", line 43, in mako_html_data
css=False)
File "/Library/Python/2.6/site-packages/Mako-0.3.2-py2.6.egg/mako/template.py", line 189, in render
return runtime._render(self, self.callable_, args, data)
File "/Library/Python/2.6/site-packages/Mako-0.3.2-py2.6.egg/mako/runtime.py", line 403, in _render
_render_context(template, callable_, context, *args, **_kwargs_for_callable(callable_, data))
File "/Library/Python/2.6/site-packages/Mako-0.3.2-py2.6.egg/mako/runtime.py", line 434, in _render_context
_exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
File "/Library/Python/2.6/site-packages/Mako-0.3.2-py2.6.egg/mako/runtime.py", line 457, in _exec_template
callable_(context, *args, **kwargs)
File "memory:0x1040470d0", line 54, in render_body
File "/Library/Python/2.6/site-packages/Mako-0.3.2-py2.6.egg/mako/exceptions.py", line 88, in __init__
self.records = self._init(traceback)
File "/Library/Python/2.6/site-packages/Mako-0.3.2-py2.6.egg/mako/exceptions.py", line 166, in _init
line = line.decode('ascii', 'replace')
AttributeError: 'NoneType' object has no attribute 'decode'
----------------------------------------
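As a general debugging aid for the original question, here is a minimal sketch using Mako's own error templates; the ${1/0} expression just stands in for whatever fails in your real template:

from mako.template import Template
from mako import exceptions

try:
    Template("hello ${1/0}").render()
except Exception:
    # Renders a plain-text traceback that points at the failing template line,
    # which is usually more useful than the AttributeError raised inside mako/exceptions.py.
    print(exceptions.text_error_template().render())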