Sublime Text 2 + Eval in REPL: Handle is Invalid - f#

Using Sublime Text 2 on Windows 8, I've set my key bindings to:
[
    {
        "keys": ["ctrl+alt+f"],
        "args": {
            "id": "repl_f#",
            "file": "config/F/Main.sublime-menu"
        },
        "command": "run_existing_window_command"
    },
    {
        "keys": ["ctrl+shift+enter"],
        "args": {
            "scope": "selection"
        },
        "command": "repl_transfer_current"
    }
]
But when I press "ctrl+shift+enter" I get the following error. Does anyone know how to resolve this?
Traceback (most recent call last):
File ".\sublime_plugin.py", line 356, in run_
File ".\text_transfer.py", line 123, in run
File ".\sublimerepl.py", line 437, in find_repl
File ".\repls\subprocess_repl.py", line 185, in is_alive
File ".\subprocess.py", line 705, in poll
File ".\subprocess.py", line 874, in _internal_poll
WindowsError: [Error 6] The handle is invalid

Related

Serilog - RollingFile Sink rolling fails based on the size

I am using Serilog.Sinks.File version 3.2.0 and I would like to roll over logs based on size. Currently, 'fileSizeLimitBytes' is set to 2000 bytes. When the log file reaches the limit set in 'fileSizeLimitBytes', it does not roll over and fails to log further messages. How can I roll over the log file based on size?
logging.json
"WriteTo": [
{
"Name": "Console",
"Args": {
"outputTemplate": "[{Timestamp:HH:mm:ss} {Level}][{ThreadId}] {SourceContext}{NewLine}{Message:lj}{NewLine}{Exception}{NewLine}"
}
},
{
"Name": "File",
"Args": {
"path": "Logs\\Test.log",
"formatter":"Serilog.Formatting.Json.JsonFormatter, Serilog",
"rollingInterval": "Day",
"restrictedToMinimumLevel": "Debug",
"retainedFileCountLimit": 5 ,
"fileSizeLimitBytes": 2000
}
}
I believe you also need to specify rollOnFileSizeLimit: true.
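To illustrate, that means adding the flag to the File sink's Args in logging.json (a sketch based on the configuration above; rollOnFileSizeLimit is a standard Serilog.Sinks.File setting):
{
    "Name": "File",
    "Args": {
        "path": "Logs\\Test.log",
        "rollingInterval": "Day",
        "restrictedToMinimumLevel": "Debug",
        "retainedFileCountLimit": 5,
        "fileSizeLimitBytes": 2000,
        "rollOnFileSizeLimit": true
    }
}
With rollOnFileSizeLimit enabled, the sink rolls to a new sequence-numbered file (_001, _002, and so on) once the current file reaches fileSizeLimitBytes.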

Aws Batch docker s3.download_fileobj() error

Hello, I am launching my Docker container through AWS Batch.
My AWS Batch job keeps failing. I am currently trying to download a file object and re-upload it to a different S3 bucket. Each time I get an OSError.
The first time it was:
OSError: [Errno 30] Read-only file system
Here is my download function:
def download(self):
    s3 = boto3.client('s3')
    file_name = self.flow_cells[10:]
    try:
        with open(file_name, 'wb') as data:
            s3.download_fileobj(
                self.source_s3_bucket,
                self.source_key,
                data
            )
        return True
    except botocore.exceptions.ClientError as error:
        print(error.response['Error']['Code'])
The error occurs in the s3.download_fileobj call; the traceback flags it when it hits the data argument.
The second time I ran this to check for the error I got
OSError: [Errno 5] Input/output error
The following is my container definition.
container_properties = <<CONTAINER_PROPERTIES
{
    "command": [
        "--object_key", "Ref::object_key",
        "--glacier_s3_bucket", "Ref::glacier_s3_bucket",
        "--output_s3_bucket", "Ref::output_s3_bucket",
        "--default_s3_bucket", "Ref::default_s3_bucket"
    ],
    "environment": [],
    "image": "temp_image_name",
    "jobRoleArn": "${aws_iam_role.task-role.arn}",
    "memory": 1024,
    "mountPoints": [],
    "privileged": true,
    "readonlyRootFilesystem": false,
    "ulimits": [],
    "vcpus": 1,
    "volumes": [],
    "jobDefinitionName": "docker-flowcell-restore-${var.environment}"
}
CONTAINER_PROPERTIES
Here is the full log for the program:
  File "src/main.py", line 101, in download
    data
  File "/usr/local/lib/python3.5/dist-packages/boto3/s3/inject.py", line 678, in download_fileobj
    return future.result()
  File "/usr/local/lib/python3.5/dist-packages/s3transfer/futures.py", line 73, in result
    return self._coordinator.result()
  File "/usr/local/lib/python3.5/dist-packages/s3transfer/futures.py", line 233, in result
    raise self._exception
  File "/usr/local/lib/python3.5/dist-packages/s3transfer/tasks.py", line 126, in __call__
    return self._execute_main(kwargs)
  File "/usr/local/lib/python3.5/dist-packages/s3transfer/tasks.py", line 150, in _execute_main
    return_value = self._main(**kwargs)
  File "/usr/local/lib/python3.5/dist-packages/s3transfer/download.py", line 583, in _main
    fileobj.write(data)
OSError: [Errno 5] Input/output error
The fix is to call os.chdir('/tmp') inside the code that the Docker container runs; see the sketch below.
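A minimal sketch of the download method with that change applied (everything except the os.chdir call and the imports comes from the question):
import os
import boto3
import botocore

def download(self):
    s3 = boto3.client('s3')
    file_name = self.flow_cells[10:]
    # /tmp is writable inside the Batch container, so switch there before
    # opening the local file that download_fileobj will write into.
    os.chdir('/tmp')
    try:
        with open(file_name, 'wb') as data:
            s3.download_fileobj(
                self.source_s3_bucket,
                self.source_key,
                data
            )
        return True
    except botocore.exceptions.ClientError as error:
        print(error.response['Error']['Code'])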

How to Setup Geojson as Datasource in TileStache

I have successfully installed TileStache in my server.
Now I have a geojson file and want to serve it through TileStache.
I am new to TileStache and I can't find a clear explanation of how to set up GeoJSON in TileStache. The best explanation I could find is here, but it uses a shapefile as the datasource.
I want to know how to set it up using GeoJSON as the datasource.
Edit
I tried adding a tes layer to the config file, so my config file looks like this:
{
    "cache":
    {
        "name": "Test",
        "path": "/tmp/stache",
        "umask": "0000"
    },
    "layers":
    {
        "osm":
        {
            "provider": {"name": "proxy", "provider": "OPENSTREETMAP"},
            "png options": {"palette": "http://tilestache.org/example-palette-openstreetmap-mapnik.act"}
        },
        "example":
        {
            "provider": {"name": "mapnik", "mapfile": "examples/style.xml"},
            "projection": "spherical mercator"
        },
        "tes":
        {
            "provider": {
                "name": "vector", "driver": "GeoJSON",
                "parameters": {"file": "tes.geojson"},
                "properties": []
            }
        }
    }
}
When I try to run it with tilestache-server.py -c /etc/TileStache/tilestache.cfg, it gives me an error like this:
Error loading Tilestache config:
Traceback (most recent call last):
File "/usr/local/bin/tilestache-server.py", line 5, in <module>
pkg_resources.run_script('TileStache==1.50.1', 'tilestache-server.py')
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 499, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 1235, in run_script
execfile(script_filename, namespace, namespace)
File "/usr/local/lib/python2.7/dist-packages/TileStache-1.50.1-py2.7.egg/EGG-INFO/scripts/tilestache-server.py", line 55, in <module>
app = TileStache.WSGITileServer(config=options.file, autoreload=True)
File "/usr/local/lib/python2.7/dist-packages/TileStache-1.50.1-py2.7.egg/TileStache/__init__.py", line 342, in __init__
self.config = parseConfigfile(config)
File "/usr/local/lib/python2.7/dist-packages/TileStache-1.50.1-py2.7.egg/TileStache/__init__.py", line 107, in parseConfigfile
return Config.buildConfiguration(config_dict, dirpath)
File "/usr/local/lib/python2.7/dist-packages/TileStache-1.50.1-py2.7.egg/TileStache/Config.py", line 218, in buildConfiguration
config.layers[name] = _parseConfigfileLayer(layer_dict, config, dirpath)
File "/usr/local/lib/python2.7/dist-packages/TileStache-1.50.1-py2.7.egg/TileStache/Config.py", line 448, in _parseConfigfileLayer
_class = Providers.getProviderByName(provider_dict['name'])
File "/usr/local/lib/python2.7/dist-packages/TileStache-1.50.1-py2.7.egg/TileStache/Providers.py", line 122, in getProviderByName
from . import Vector
File "/usr/local/lib/python2.7/dist-packages/TileStache-1.50.1-py2.7.egg/TileStache/Vector/__init__.py", line 164, in <module>
from osgeo import ogr, osr
ImportError: No module named osgeo
I can't figure out what is wrong.
ImportError: No module named osgeo
You are missing the GDAL library. It can be quite tricky to install; I got it working on Ubuntu 14.04 by using the PPA ppa:ubuntugis/ubuntugis-unstable. Read the instructions in this post over at the GIS Stack Exchange.
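For reference, a rough install sketch on Ubuntu 14.04 with that PPA (the package names gdal-bin and python-gdal are an assumption for your setup; they provide the osgeo bindings that TileStache's vector provider imports):
sudo add-apt-repository ppa:ubuntugis/ubuntugis-unstable
sudo apt-get update
sudo apt-get install gdal-bin python-gdal
# quick check that the osgeo bindings import and the GeoJSON driver is available
python -c "from osgeo import ogr, osr; print(ogr.GetDriverByName('GeoJSON'))"
After that, restarting tilestache-server.py should get past the ImportError.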

Cannot connect to legacy database using django-pyodbc

I have installed django-pyodbc and configured my database settings as:
DEV: Windows XP (64bit), Python 3.3, MDAC 2.7
DB: Remote MSSQL 2008
DATABASES = {
    'default': {
        'ENGINE': 'django_pyodbc',
        'HOST': 'my.server.com',
        'PORT': '14330',
        'USER': 'xxx500',
        'PASSWORD': 'passw',
        'NAME': 'xxx500',
        'OPTIONS': {
            'host_is_server': True
        },
    }
}
I can telnet to the server and I can access the database via a third-party GUI (Aqua Data Studio), so I know there is no firewall or login issue.
When I try to run this command to introspect the legacy database I get this error...
(myProject) :\Users\...>python manage.py inspectdb
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python33\lib\site-packages\django\core\management\__init__.py", line 399, in execute_from_command_line
utility.execute()
File "C:\Python33\lib\site-packages\django\core\management\__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python33\lib\site-packages\django\core\management\base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "C:\Python33\lib\site-packages\django\core\management\base.py", line 285, in execute
output = self.handle(*args, **options)
File "C:\Python33\lib\site-packages\django\core\management\base.py", line 415, in handle
return self.handle_noargs(**options)
File "C:\Python33\lib\site-packages\django\core\management\commands\inspectdb.py", line 27, in handle_noargs
for line in self.handle_inspection(options):
File "C:\Python33\lib\site-packages\django\core\management\commands\inspectdb.py", line 40, in handle_inspection
cursor = connection.cursor()
File "C:\Python33\lib\site-packages\django\db\backends\__init__.py", line 157, in cursor
cursor = self.make_debug_cursor(self._cursor())
File "C:\Python33\lib\site-packages\django_pyodbc\base.py", line 280, in _cursor
autocommit=autocommit)
pyodbc.Error: ('08001', '[08001] [Microsoft][ODBC SQL Server Driver][DBNETLIB]SQL Server does not exist or access denied. (17) (SQLDriverConnect)')
What am I missing? Would appreciate some feedback.
Thanks
I made the following changes:
From
DATABASES = {
    'default': {
        'ENGINE': 'django_pyodbc',
        'HOST': 'my.server.com',
        'PORT': '14330',
        'USER': 'xxx500',
        'PASSWORD': 'passw',
        'NAME': 'xxx500',
        'OPTIONS': {
            'host_is_server': True
        },
    }
}
To
DATABASES = {
    'default': {
        ...
        'HOST': 'my.server.com,14330',
        ...
    }
}
and got the utf-8 error that requires commenting out lines 364-367 in the django_pyodbc/base.py file.
I believe that isn't really the solution you'd like to use; modifying the code of django-pyodbc isn't a good idea. That said, be sure you're using the most current fork of django-pyodbc, which can currently be found here:
https://github.com/lionheart/django-pyodbc/
Here's an example DB configuration for settings.py which I've gotten to work on the following platforms (w/FreeTDS / UnixODBC for Linux):
Windows 7
Ubuntu as a VM under Vagrant
Mac OS/X for local development
RHEL 5 + 6
Here's the configuration:
DATABASES = {
    'default': {
        'ENGINE': 'django_pyodbc',
        'NAME': 'db_name',
        'USER': 'db_user',
        'PASSWORD': 'your_password',
        'HOST': 'database.domain.com,1433',
        'PORT': '1433',
        'OPTIONS': {
            'host_is_server': True,
            'autocommit': True,
            'unicode_results': True,
            'extra_params': 'tds_version=8.0'
        },
    }
}
You need to add the driver to your database back end.
DATABASES = {
    'default': {
        .......
        'OPTIONS': {
            .......
            'driver': 'SQL Server',
            .......
        },
    }
}
driver: String. The ODBC driver to use. The default is "SQL Server" on Windows and "FreeTDS" on other platforms.
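Putting those pieces together for the original Windows setup, a settings.py sketch might look like this (host, port, and credentials are taken from the question; 'SQL Server' is the Windows default driver described above, so treat this as a starting point rather than a verified configuration):
DATABASES = {
    'default': {
        'ENGINE': 'django_pyodbc',
        'NAME': 'xxx500',
        'USER': 'xxx500',
        'PASSWORD': 'passw',
        # server and port combined, as in the change that worked for the asker
        'HOST': 'my.server.com,14330',
        'PORT': '14330',
        'OPTIONS': {
            'host_is_server': True,
            'driver': 'SQL Server',
        },
    }
}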

Neo4j REST batch Illegal character in path at index 2: ./{0}/properties/

The Request:
[
    {
        "method": "GET",
        "to": "/node/1890",
        "id": 0
    },
    {
        "method": "PUT",
        "to": "{0}/properties/Name",
        "body": "NewName",
        "id": 1
    }
]
The Response:
{
"message": "Illegal character in path at index 2: ./{0}/properties/Name",
"exception": "IllegalArgumentException",
"fullname": "java.lang.IllegalArgumentException",
"stacktrace": ["java.net.URI.create(URI.java:859)", "java.net.URI.resolve(URI.java:1043)", "org.neo4j.server.rest.batch.BatchOperations.calculateTargetUri(BatchOperations.java:100)", "org.neo4j.server.rest.batch.BatchOperations.performRequest(BatchOperations.java:181)", "org.neo4j.server.rest.batch.BatchOperations.parseAndPerform(BatchOperations.java:159)", "org.neo4j.server.rest.batch.NonStreamingBatchOperations.performBatchJobs(NonStreamingBatchOperations.java:48)", "org.neo4j.server.rest.web.BatchOperationService.batchProcess(BatchOperationService.java:117)", "org.neo4j.server.rest.web.BatchOperationService.performBatchOperations(BatchOperationService.java:72)", "java.lang.reflect.Method.invoke(Method.java:601)", "org.neo4j.server.rest.security.SecurityFilter.doFilter(SecurityFilter.java:112)"],
"cause": {
"message": "Illegal character in path at index 2: ./{0}/properties/Name",
"exception": "URISyntaxException",
"stacktrace": ["java.net.URI$Parser.fail(URI.java:2829)", "java.net.URI$Parser.checkChars(URI.java:3002)", "java.net.URI$Parser.parseHierarchical(URI.java:3086)", "java.net.URI$Parser.parse(URI.java:3044)", "java.net.URI.<init>(URI.java:595)", "java.net.URI.create(URI.java:857)", "java.net.URI.resolve(URI.java:1043)", "org.neo4j.server.rest.batch.BatchOperations.calculateTargetUri(BatchOperations.java:100)", "org.neo4j.server.rest.batch.BatchOperations.performRequest(BatchOperations.java:181)", "org.neo4j.server.rest.batch.BatchOperations.parseAndPerform(BatchOperations.java:159)", "org.neo4j.server.rest.batch.NonStreamingBatchOperations.performBatchJobs(NonStreamingBatchOperations.java:48)", "org.neo4j.server.rest.web.BatchOperationService.batchProcess(BatchOperationService.java:117)", "org.neo4j.server.rest.web.BatchOperationService.performBatchOperations(BatchOperationService.java:72)", "java.lang.reflect.Method.invoke(Method.java:601)", "org.neo4j.server.rest.security.SecurityFilter.doFilter(SecurityFilter.java:112)"],
"fullname": "java.net.URISyntaxException"
}
}
What's the problem here?
No errors in messages.log or any other logs for that matter. Not sure why the logs are empty. I had to turn off the X-Stream header to get this much info.
Now I know that since I already know the node's ID I can just reference it directly, and I will. Still, this seems like something that should work.
I guess the URL for the second request should be:
/node/{0}/properties/foo
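If that guess is right, the second job in the batch would be written like this (Name substituted for foo to match the property in the question; this merely renders the suggestion above and is not verified against the batch endpoint):
{
    "method": "PUT",
    "to": "/node/{0}/properties/Name",
    "body": "NewName",
    "id": 1
}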
