I have successfully installed TileStache on my server.
Now I have a GeoJSON file and want to serve it through TileStache.
I am new to TileStache and can't find a clear explanation of how to set up GeoJSON in TileStache. The best explanation I could find is here, but it uses a shp file as the datasource.
I want to know how to set it up using GeoJSON as the datasource.
Edit
I tried adding a layer called tes to the config file, so my config file now looks like this:
{
  "cache":
  {
    "name": "Test",
    "path": "/tmp/stache",
    "umask": "0000"
  },
  "layers":
  {
    "osm":
    {
      "provider": {"name": "proxy", "provider": "OPENSTREETMAP"},
      "png options": {"palette": "http://tilestache.org/example-palette-openstreetmap-mapnik.act"}
    },
    "example":
    {
      "provider": {"name": "mapnik", "mapfile": "examples/style.xml"},
      "projection": "spherical mercator"
    },
    "tes":
    {
      "provider":
      {
        "name": "vector", "driver": "GeoJSON",
        "parameters": {"file": "tes.geojson"},
        "properties": []
      }
    }
  }
}
When I try to run it using tilestache-server.py -c /etc/TileStache/tilestache.cfg, I get this error:
Error loading Tilestache config:
Traceback (most recent call last):
  File "/usr/local/bin/tilestache-server.py", line 5, in <module>
    pkg_resources.run_script('TileStache==1.50.1', 'tilestache-server.py')
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 499, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 1235, in run_script
    execfile(script_filename, namespace, namespace)
  File "/usr/local/lib/python2.7/dist-packages/TileStache-1.50.1-py2.7.egg/EGG-INFO/scripts/tilestache-server.py", line 55, in <module>
    app = TileStache.WSGITileServer(config=options.file, autoreload=True)
  File "/usr/local/lib/python2.7/dist-packages/TileStache-1.50.1-py2.7.egg/TileStache/__init__.py", line 342, in __init__
    self.config = parseConfigfile(config)
  File "/usr/local/lib/python2.7/dist-packages/TileStache-1.50.1-py2.7.egg/TileStache/__init__.py", line 107, in parseConfigfile
    return Config.buildConfiguration(config_dict, dirpath)
  File "/usr/local/lib/python2.7/dist-packages/TileStache-1.50.1-py2.7.egg/TileStache/Config.py", line 218, in buildConfiguration
    config.layers[name] = _parseConfigfileLayer(layer_dict, config, dirpath)
  File "/usr/local/lib/python2.7/dist-packages/TileStache-1.50.1-py2.7.egg/TileStache/Config.py", line 448, in _parseConfigfileLayer
    _class = Providers.getProviderByName(provider_dict['name'])
  File "/usr/local/lib/python2.7/dist-packages/TileStache-1.50.1-py2.7.egg/TileStache/Providers.py", line 122, in getProviderByName
    from . import Vector
  File "/usr/local/lib/python2.7/dist-packages/TileStache-1.50.1-py2.7.egg/TileStache/Vector/__init__.py", line 164, in <module>
    from osgeo import ogr, osr
ImportError: No module named osgeo
I can't figure out what is wrong.
ImportError: No module named osgeo
You are missing the GDAL library. It can be quite tricky to install; I got it working on Ubuntu 14.04 by using the PPA ppa:ubuntugis/ubuntugis-unstable. Read the instructions in this post over at the GIS Stack Exchange.
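As a quick sanity check before restarting TileStache, you can test whether the bindings are importable from the same Python environment (a minimal sketch; the printed hints are just illustrative):

```python
import importlib.util

def gdal_bindings_available() -> bool:
    """Return True if the GDAL Python bindings (the osgeo package) can be imported."""
    return importlib.util.find_spec("osgeo") is not None

if gdal_bindings_available():
    print("osgeo found: TileStache's vector provider should import cleanly")
else:
    print("osgeo missing: install the GDAL Python bindings first")
```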
Related
I am using the Python jira package and trying to create a Jira issue:
from jira import JIRA

jiraOptions = {'server': "http://jira.xxx.com"}
jira = JIRA(options=jiraOptions, basic_auth=("xxx", "xxx"))

def create_new_issue(project, summary, description, issuetype, username):
    issue_dict = {
        'project': {'key': project},
        'summary': summary,
        'description': description,
        'issuetype': {'name': issuetype},
        'reporter': {'name': username}
    }
    new_issue = jira.create_issue(fields=issue_dict)

create_new_issue("p1", "test", "teseset", "Bug", "xxxx")
I get a 405 error and can't figure out where I went wrong:
Traceback (most recent call last):
  File ".\jiraUtil.py", line 97, in <module>
    create_new_issue("p1", "test", "teseset", "Bug", "xxxx")
  File ".\jiraUtil.py", line 58, in create_new_issue
    new_issue = jira.create_issue(fields=issue_dict)
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python38\lib\site-packages\jira\client.py", line 1473, in create_issue
    r = self._session.post(url, data=json.dumps(data))
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python38\lib\site-packages\jira\resilientsession.py", line 198, in post
    return self.__verb("POST", str(url), data=data, json=json, **kwargs)
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python38\lib\site-packages\jira\resilientsession.py", line 189, in __verb
    raise_on_error(response, verb=verb, **kwargs)
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python38\lib\site-packages\jira\resilientsession.py", line 64, in raise_on_error
    raise JIRAError(
jira.exceptions.JIRAError: JiraError HTTP 405 url: https://jira.xxx.com/rest/api/2/issue
At a guess, the "xxxx" user doesn't exist in your Jira Cloud instance; try changing it to "-1", which means unassigned.
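One way to make the create call more robust is to only set the reporter field when you actually have a valid username (a hedged sketch; build_issue_fields is a hypothetical helper, not part of the jira package):

```python
def build_issue_fields(project, summary, description, issuetype, username=None):
    """Build a Jira issue fields dict; 'reporter' is only set when a
    username is supplied, so the server default applies otherwise."""
    fields = {
        'project': {'key': project},
        'summary': summary,
        'description': description,
        'issuetype': {'name': issuetype},
    }
    if username is not None:
        fields['reporter'] = {'name': username}
    return fields

# Usage against an existing client would look like:
# new_issue = jira.create_issue(fields=build_issue_fields("p1", "test", "teseset", "Bug"))
```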
I ran into some issues myself doing this, so I built this project: https://github.com/dren79/JiraScripting_public. Let me know if it helps :)
I am trying to run an object detection job on AWS. Although opencv is listed in the requirements file, I get the error "no module named cv2". I am not sure how to fix this error; could someone help me please?
My requirements.txt file has:
opencv-python
numpy>=1.18.2
scipy>=1.4.1
wget>=3.2
tensorflow==2.3.1
tensorflow-gpu==2.3.1
tqdm==4.43.0
pandas
boto3
awscli
urllib3
mss
I tried installing "imgaug" and "opencv-python-headless" as well, but I am still not able to get rid of this error.
sh-4.2$ python train_launch.py
[INFO-ROLE] arn:aws:iam::021945294007:role/service-role/AmazonSageMaker-ExecutionRole-20200225T145269
train_instance_type has been renamed in sagemaker>=2.
See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
train_instance_count has been renamed in sagemaker>=2.
See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
train_instance_type has been renamed in sagemaker>=2.
See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
2021-04-14 13:29:58 Starting - Starting the training job...
2021-04-14 13:30:03 Starting - Launching requested ML instances......
2021-04-14 13:31:11 Starting - Preparing the instances for training......
2021-04-14 13:32:17 Downloading - Downloading input data...
2021-04-14 13:32:41 Training - Downloading the training image..WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/__init__.py:1473: The name tf.estimator.inputs is deprecated. Please use tf.compat.v1.estimator.inputs instead.
2021-04-14 13:33:03,970 sagemaker-containers INFO Imported framework sagemaker_tensorflow_container.training
2021-04-14 13:33:05,030 sagemaker-containers INFO Invoking user script
Training Env:
{
    "additional_framework_parameters": {},
    "channel_input_dirs": {
        "training": "/opt/ml/input/data/training"
    },
    "current_host": "algo-1",
    "framework_module": "sagemaker_tensorflow_container.training:main",
    "hosts": [
        "algo-1"
    ],
    "hyperparameters": {
        "unfreezed_epochs": 2,
        "freezed_batch_size": 8,
        "freezed_epochs": 1,
        "unfreezed_batch_size": 8,
        "model_dir": "s3://sagemaker-dataset-ai/Dataset/yolo/Results/yolov4_small/yolov4-2021-04-14-15-29/model"
    },
    "input_config_dir": "/opt/ml/input/config",
    "input_data_config": {
        "training": {
            "TrainingInputMode": "File",
            "S3DistributionType": "FullyReplicated",
            "RecordWrapperType": "None"
        }
    },
    "input_dir": "/opt/ml/input",
    "is_master": true,
    "job_name": "yolov4-2021-04-14-15-29",
    "log_level": 20,
    "master_hostname": "algo-1",
    "model_dir": "/opt/ml/model",
    "module_dir": "s3://sagemaker-dataset-ai/Dataset/yolo/Results/yolov4_smal/yolov4-2021-04-14-15-29/source/sourcedir.tar.gz",
    "module_name": "train_indu",
    "network_interface_name": "eth0",
    "num_cpus": 8,
    "num_gpus": 1,
    "output_data_dir": "/opt/ml/output/data",
    "output_dir": "/opt/ml/output",
    "output_intermediate_dir": "/opt/ml/output/intermediate",
    "resource_config": {
        "current_host": "algo-1",
        "hosts": [
            "algo-1"
        ],
        "network_interface_name": "eth0"
    },
    "user_entry_point": "train_indu.py"
}
Environment variables:
SM_HOSTS=["algo-1"]
SM_NETWORK_INTERFACE_NAME=eth0
SM_HPS={"freezed_batch_size":8,"freezed_epochs":1,"model_dir":"s3://sagemaker-dataset-ai/Dataset/yolo/Results/yolov4_small/yolov4-2021-04-14-15-29/model","unfreezed_batch_size":8,"unfreezed_epochs":2}
SM_USER_ENTRY_POINT=train_indu.py
SM_FRAMEWORK_PARAMS={}
SM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}
SM_INPUT_DATA_CONFIG={"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}
SM_OUTPUT_DATA_DIR=/opt/ml/output/data
SM_CHANNELS=["training"]
SM_CURRENT_HOST=algo-1
SM_MODULE_NAME=train_indu
SM_LOG_LEVEL=20
SM_FRAMEWORK_MODULE=sagemaker_tensorflow_container.training:main
SM_INPUT_DIR=/opt/ml/input
SM_INPUT_CONFIG_DIR=/opt/ml/input/config
SM_OUTPUT_DIR=/opt/ml/output
SM_NUM_CPUS=8
SM_NUM_GPUS=1
SM_MODEL_DIR=/opt/ml/model
SM_MODULE_DIR=s3://sagemaker-dataset-ai/Dataset/yolo/Results/yolov4_smal/yolov4-2021-04-14-15-29/source/sourcedir.tar.gz
SM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{"training":"/opt/ml/input/data/training"},"current_host":"algo-1","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1"],"hyperparameters":{"freezed_batch_size":8,"freezed_epochs":1,"model_dir":"s3://sagemaker-dataset-ai/Dataset/yolo/Results/yolov4_small/yolov4-2021-04-14-15-29/model","unfreezed_batch_size":8,"unfreezed_epochs":2},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":true,"job_name":"yolov4-2021-04-14-15-29","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-dataset-ai/Dataset/yolo/Results/yolov4_smal/yolov4-2021-04-14-15-29/source/sourcedir.tar.gz","module_name":"train_indu","network_interface_name":"eth0","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"train_indu.py"}
SM_USER_ARGS=["--freezed_batch_size","8","--freezed_epochs","1","--model_dir","s3://sagemaker-dataset-ai/Dataset/yolo/Results/yolov4_small/yolov4-2021-04-14-15-29/model","--unfreezed_batch_size","8","--unfreezed_epochs","2"]
SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate
SM_CHANNEL_TRAINING=/opt/ml/input/data/training
SM_HP_UNFREEZED_EPOCHS=2
SM_HP_FREEZED_BATCH_SIZE=8
SM_HP_FREEZED_EPOCHS=1
SM_HP_UNFREEZED_BATCH_SIZE=8
SM_HP_MODEL_DIR=s3://sagemaker-dataset-ai/Dataset/yolo/Results/yolov4_small/yolov4-2021-04-14-15-29/model
PYTHONPATH=/opt/ml/code:/usr/local/bin:/usr/lib/python36.zip:/usr/lib/python3.6:/usr/lib/python3.6/lib-dynload:/usr/local/lib/python3.6/dist-packages:/usr/lib/python3/dist-packages
Invoking script with the following command:
/usr/bin/python3 train_indu.py --freezed_batch_size 8 --freezed_epochs 1 --model_dir s3://sagemaker-dataset-ai/Dataset/yolo/Results/yolov4_small/yolov4-2021-04-14-15-29/model --unfreezed_batch_size 8 --unfreezed_epochs 2
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/__init__.py:1473: The name tf.estimator.inputs is deprecated. Please use tf.compat.v1.estimator.inputs instead.
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 4667030854237447206
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 3059419181456814147
physical_device_desc: "device: XLA_CPU device"
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 6024475084695919958
physical_device_desc: "device: XLA_GPU device"
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 14949928141
locality {
bus_id: 1
links {
}
}
incarnation: 13034103301168381073
physical_device_desc: "device: 0, name: Tesla T4, pci bus id: 0000:00:1e.0, compute capability: 7.5"
]
Traceback (most recent call last):
  File "train_indu.py", line 12, in <module>
    from yolov3.dataset import Dataset
  File "/opt/ml/code/yolov3/dataset.py", line 3, in <module>
    import cv2
ModuleNotFoundError: No module named 'cv2'
2021-04-14 13:33:08,453 sagemaker-containers ERROR ExecuteUserScriptError:
Command "/usr/bin/python3 train_indu.py --freezed_batch_size 8 --freezed_epochs 1 --model_dir s3://sagemaker-dataset-ai/Dataset/yolo/Results/yolov4_small/yolov4-2021-04-14-15-29/model --unfreezed_batch_size 8 --unfreezed_epochs 2"
2021-04-14 13:33:11 Uploading - Uploading generated training model
2021-04-14 13:33:54 Failed - Training job failed
Traceback (most recent call last):
  File "train_launch.py", line 41, in <module>
    estimator.fit(s3_data_path, logs=True, job_name=job_name) #the argument logs is crucial if you want to see what happends
  File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/sagemaker/estimator.py", line 535, in fit
    self.latest_training_job.wait(logs=logs)
  File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/sagemaker/estimator.py", line 1210, in wait
    self.sagemaker_session.logs_for_job(self.job_name, wait=True, log_type=logs)
  File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/sagemaker/session.py", line 3365, in logs_for_job
    self._check_job_status(job_name, description, "TrainingJobStatus")
  File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/sagemaker/session.py", line 2957, in _check_job_status
    actual_status=status,
sagemaker.exceptions.UnexpectedStatusException: Error for Training job yolov4-2021-04-14-15-29: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
Command "/usr/bin/python3 train_indu.py --freezed_batch_size 8 --freezed_epochs 1 --model_dir s3://sagemaker-dataset-ai/Dataset/yolo/Results/yolov4_small/yolov4-2021-04-14-15-29/model --unfreezed_batch_size 8 --unfreezed_epochs 2"
Make sure your estimator has:
framework_version = '2.3',
py_version = 'py37',
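For illustration, here is a hedged sketch of the estimator arguments (entry_point, source_dir, and the instance type are placeholders for your own values). With a TensorFlow 2.x framework_version plus a py_version, SageMaker script mode installs the packages listed in the requirements.txt inside source_dir before running your script:

```python
# Hypothetical estimator configuration; adjust names and paths to your project.
estimator_kwargs = dict(
    entry_point="train_indu.py",   # your training script
    source_dir="src",              # directory containing requirements.txt (with opencv-python)
    framework_version="2.3",       # matches tensorflow==2.3.1 in requirements.txt
    py_version="py37",
    instance_type="ml.g4dn.xlarge",
    instance_count=1,
)
# estimator = sagemaker.tensorflow.TensorFlow(role=role, **estimator_kwargs)
```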
Hello, I am launching my Docker container through AWS Batch.
My AWS Batch job keeps failing. I am currently trying to download a file object and re-upload it to a different S3 bucket. Each time I get an OSError.
The first time it was:
OSError: [Errno 30] Read-only file system
Here is my download function:
def download(self):
    s3 = boto3.client('s3')
    file_name = self.flow_cells[10:]
    try:
        with open(file_name, 'wb') as data:
            s3.download_fileobj(
                self.source_s3_bucket,
                self.source_key,
                data
            )
        return True
    except botocore.exceptions.ClientError as error:
        print(error.response['Error']['Code'])
The error occurs in the s3.download_fileobj call; it gets flagged when it hits data.
The second time I ran this to check for the error, I got:
OSError: [Errno 5] Input/output error
The following is my container definition.
container_properties = <<CONTAINER_PROPERTIES
{
  "command": [
    "--object_key", "Ref::object_key",
    "--glacier_s3_bucket", "Ref::glacier_s3_bucket",
    "--output_s3_bucket", "Ref::output_s3_bucket",
    "--default_s3_bucket", "Ref::default_s3_bucket"
  ],
  "environment": [],
  "image": "temp_image_name",
  "jobRoleArn": "${aws_iam_role.task-role.arn}",
  "memory": 1024,
  "mountPoints": [],
  "privileged": true,
  "readonlyRootFilesystem": false,
  "ulimits": [],
  "vcpus": 1,
  "volumes": [],
  "jobDefinitionName": "docker-flowcell-restore-${var.environment}"
}
CONTAINER_PROPERTIES
Here is the full log for the program:
File "src/main.py", line 101, in download
    data
File "/usr/local/lib/python3.5/dist-packages/boto3/s3/inject.py", line 678, in download_fileobj
    return future.result()
File "/usr/local/lib/python3.5/dist-packages/s3transfer/futures.py", line 73, in result
    return self._coordinator.result()
File "/usr/local/lib/python3.5/dist-packages/s3transfer/futures.py", line 233, in result
    raise self._exception
File "/usr/local/lib/python3.5/dist-packages/s3transfer/tasks.py", line 126, in __call__
    return self._execute_main(kwargs)
File "/usr/local/lib/python3.5/dist-packages/s3transfer/tasks.py", line 150, in _execute_main
    return_value = self._main(**kwargs)
File "/usr/local/lib/python3.5/dist-packages/s3transfer/download.py", line 583, in _main
    fileobj.write(data)
OSError: [Errno 5] Input/output error
The fix for this is to call
os.chdir('/tmp')
inside the code that the Docker container runs.
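Equivalently, you can write the file under /tmp explicitly instead of changing the working directory (a sketch of the idea; download_to_tmp is a hypothetical helper, not part of boto3):

```python
import os

def download_to_tmp(s3_client, bucket, key, file_name):
    """Download an S3 object to /tmp, which is typically the only writable
    path when the container's root filesystem is read-only."""
    path = os.path.join("/tmp", file_name)
    with open(path, "wb") as data:
        s3_client.download_fileobj(bucket, key, data)
    return path
```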
Using Sublime Text 2 on Windows 8, I've set my key bindings to:
[
    {
        "keys": ["ctrl+alt+f"],
        "args": {
            "id": "repl_f#",
            "file": "config/F/Main.sublime-menu"
        },
        "command": "run_existing_window_command"
    },
    {
        "keys": ["ctrl+shift+enter"],
        "args": {
            "scope": "selection"
        },
        "command": "repl_transfer_current"
    }
]
But when I press "ctrl+shift+enter" I get the following error. Does anyone know how to resolve it?
Traceback (most recent call last):
  File ".\sublime_plugin.py", line 356, in run_
  File ".\text_transfer.py", line 123, in run
  File ".\sublimerepl.py", line 437, in find_repl
  File ".\repls\subprocess_repl.py", line 185, in is_alive
  File ".\subprocess.py", line 705, in poll
  File ".\subprocess.py", line 874, in _internal_poll
WindowsError: [Error 6] The handle is invalid
I have installed django-pyodbc and configured my database settings as follows:
DEV: Windows XP (64bit), Python 3.3, MDAC 2.7
DB: Remote MSSQL 2008
DATABASES = {
    'default': {
        'ENGINE': 'django_pyodbc',
        'HOST': 'my.server.com',
        'PORT': '14330',
        'USER': 'xxx500',
        'PASSWORD': 'passw',
        'NAME': 'xxx500',
        'OPTIONS': {
            'host_is_server': True
        },
    }
}
I can telnet to the server and I can access the database via the third-party GUI Aqua Data Studio, so I know there is no firewall or login issue.
When I try to run this command to introspect the legacy database, I get this error:
(myProject) :\Users\...>python manage.py inspectdb
Traceback (most recent call last):
  File "manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "C:\Python33\lib\site-packages\django\core\management\__init__.py", line 399, in execute_from_command_line
    utility.execute()
  File "C:\Python33\lib\site-packages\django\core\management\__init__.py", line 392, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "C:\Python33\lib\site-packages\django\core\management\base.py", line 242, in run_from_argv
    self.execute(*args, **options.__dict__)
  File "C:\Python33\lib\site-packages\django\core\management\base.py", line 285, in execute
    output = self.handle(*args, **options)
  File "C:\Python33\lib\site-packages\django\core\management\base.py", line 415, in handle
    return self.handle_noargs(**options)
  File "C:\Python33\lib\site-packages\django\core\management\commands\inspectdb.py", line 27, in handle_noargs
    for line in self.handle_inspection(options):
  File "C:\Python33\lib\site-packages\django\core\management\commands\inspectdb.py", line 40, in handle_inspection
    cursor = connection.cursor()
  File "C:\Python33\lib\site-packages\django\db\backends\__init__.py", line 157, in cursor
    cursor = self.make_debug_cursor(self._cursor())
  File "C:\Python33\lib\site-packages\django_pyodbc\base.py", line 280, in _cursor
    autocommit=autocommit)
pyodbc.Error: ('08001', '[08001] [Microsoft][ODBC SQL Server Driver][DBNETLIB]SQL Server does not exist or access denied. (17) (SQLDriverConnect)')
What am I missing? Would appreciate some feedback.
Thanks
I made the following changes:
From
DATABASES = {
    'default': {
        'ENGINE': 'django_pyodbc',
        'HOST': 'my.server.com',
        'PORT': '14330',
        'USER': 'xxx500',
        'PASSWORD': 'passw',
        'NAME': 'xxx500',
        'OPTIONS': {
            'host_is_server': True
        },
    }
}
To
DATABASES = {
    'default': {
        ...
        'HOST': 'my.server.com,14330',
        ...
    }
}
and got the utf-8 error that requires commenting out lines 364-367 in the django_pyodbc/base.py file.
I believe that isn't really the solution you'd like to use; modifying the code of django-pyodbc isn't a good idea. That said, be sure you're using the most current fork of django-pyodbc, which can currently be found here:
https://github.com/lionheart/django-pyodbc/
Here's an example DB configuration for settings.py which I've gotten to work on the following platforms (with FreeTDS/UnixODBC on Linux):
Windows 7
Ubuntu as a VM under Vagrant
Mac OS/X for local development
RHEL 5 + 6
Here's the configuration:
DATABASES = {
    'default': {
        'ENGINE': 'django_pyodbc',
        'NAME': 'db_name',
        'USER': 'db_user',
        'PASSWORD': 'your_password',
        'HOST': 'database.domain.com,1433',
        'PORT': '1433',
        'OPTIONS': {
            'host_is_server': True,
            'autocommit': True,
            'unicode_results': True,
            'extra_params': 'tds_version=8.0'
        },
    }
}
You need to add the driver to your database back end.
DATABASES = {
    'default': {
        .......
        'OPTIONS': {
            .......
            'driver': 'SQL Server',
            .......
        },
    }
}
String. The ODBC driver to use. The default is "SQL Server" on Windows and "FreeTDS" on other platforms.
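Putting the pieces together, here is a hedged settings.py sketch for a Linux host (hostname, port, and credentials are placeholders; on Windows you would likely swap the driver to 'SQL Server'):

```python
# Hypothetical settings.py fragment combining the FreeTDS options above
# with an explicit 'driver' entry; values are placeholders.
DATABASES = {
    'default': {
        'ENGINE': 'django_pyodbc',
        'NAME': 'db_name',
        'USER': 'db_user',
        'PASSWORD': 'your_password',
        'HOST': 'database.domain.com,1433',  # host and port in one string
        'PORT': '1433',
        'OPTIONS': {
            'driver': 'FreeTDS',             # 'SQL Server' on Windows
            'host_is_server': True,
            'autocommit': True,
            'unicode_results': True,
            'extra_params': 'tds_version=8.0',
        },
    }
}
```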