Latitude/longitude confusion using GeoDjango and OSMGeoAdmin

I'm trying to store a polygon using the OSMGeoAdmin map widget, and I've declared my admin class like this:
class ProjectAdmin(OSMGeoAdmin):
    default_lon = 4600000
    default_lat = 180000
    default_zoom = 4
And this is my geometry models' base class:
class AbstractGeometryModel(models.Model):
    class Meta:
        abstract = True

    geometry = models.MultiPolygonField()

    @property
    def boundary(self):
        boundary = self.geometry.boundary[0]
        return [[x[1], x[0]] for x in boundary]

    @property
    def bbox(self):
        box = self.geometry.extent
        return [[box[1], box[0]], [box[3], box[2]]]

    @property
    def centroid(self):
        return [self.geometry.centroid[1], self.geometry.centroid[0]]
As mentioned in the Django docs, the default SRID should be 4326, and when I checked myself, the extent of a geometry came back in [lat, lon, lat, lon] order; I needed [lon, lat], so I wrote the property methods above.
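To show the ordering I mean, here is a minimal check with a raw GEOS polygon (the GeoDjango docs describe extent as returning (xmin, ymin, xmax, ymax)):
from django.contrib.gis.geos import Polygon

# A 2x1 rectangle: x runs 0..2, y runs 0..1.
p = Polygon(((0, 0), (0, 1), (2, 1), (2, 0), (0, 0)))
print(p.extent)  # (0.0, 0.0, 2.0, 1.0), i.e. (xmin, ymin, xmax, ymax)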
The problem is that whenever I deploy my service in a new production environment, or bring it up from scratch in a local development environment, the lat/lons are stored in reversed order.
To check this problem, I decided to compare the production database with my local development db; here are the results:
>>> from django.contrib.gis.geos import GEOSGeometry
>>> production_data = GEOSGeometry('0106...(Geometry copied from production db)...1532')
>>> production_data.extent
(17.2922570380823, 44.879874515170165, 18.00644604063442, 46.02245264001086)
>>> production_data.srid
4326
>>> development_data = GEOSGeometry('0106...(Geometry copied from development db)...1532')
>>> development_data.extent
(44.75627832378126, 17.887728306166515, 46.09661035484472, 18.10414680358874)
>>> development_data.srid
4326
As you can see, the SRIDs are the same, but the data is stored as lat, lon in the production db and as lon, lat in the local db!
I tried to debug to find out what's going on, but I didn't succeed. I'd appreciate it if someone could help me with this problem.
Thanks.
UPDATE
I also ran the code below in both the server and the local Python console:
local:
Python 3.8.3rc1 (default, May 10 2020, 12:11:09)
[GCC 7.4.0] on linux
Django 2.2.4
>>> from django.contrib.gis.geos import GEOSGeometry
>>> a = GEOSGeometry('SRID=3857;MULTIPOLYGON(((..lots of geo numbers...)))')
>>> a.transform(4326)
>>> a.extent
(34.109070627613065, 46.87535791416505, 36.24604033684087, 53.423209475753794)
server:
root@fdfd7ccf489b:/code# python manage.py shell
Python 3.7.6 (default, Feb 26 2020, 15:34:58)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.13.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: from django.contrib.gis.geos import GEOSGeometry
In [2]: a = GEOSGeometry('SRID=3857;MULTIPOLYGON(((..lots of geo numbers...)))')
In [3]: a.extent
Out[3]: (5218140.9737573, 4043456.9660937, 5947044.4753833, 4334529.1697632)
In [4]: a.transform(4326)
In [5]: a.extent
Out[5]: (46.875357914165036, 34.10907062761305, 53.42320947575378, 36.246040336840856)
So it seems the problem is related to the GeoDjango version or configuration...

It seems that the root cause of the problem is the different versions of GDAL installed in the production (GDAL 2.4.0) and local (GDAL 3.0.1) environments.
https://code.djangoproject.com/ticket/31695#no1
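A quick way to confirm a mismatch like this is to print the library versions GeoDjango actually loaded in each environment (a minimal sketch using GeoDjango's version helpers):
import django
from django.contrib.gis.gdal import gdal_version
from django.contrib.gis.geos import geos_version

# GDAL 2.x and 3.x handle EPSG:4326 axis order differently, so a
# major-version mismatch here can explain the swapped coordinates.
print("Django:", django.get_version())
print("GDAL:", gdal_version())
print("GEOS:", geos_version())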

Where is MobyLCPSolver?

ImportError: cannot import name 'MobyLCPSolver' from 'pydrake.all' (/home/docker/drake/drake-build/install/lib/python3.8/site-packages/pydrake/all.py)
I have the latest Drake and cannot import it.
Can anyone help?
As of pydrake v1.12.0, the MobyLcp C++ API is not bound in Python.
However, if you feed an LCP into Solve() then Drake can choose Moby to solve it. You can take advantage of this to create an instance of MobyLCP:
import numpy as np
from pydrake.all import (
    ChooseBestSolver,
    MakeSolver,
    MathematicalProgram,
)

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2)
prog.AddLinearComplementarityConstraint(np.eye(2), np.array([1, 2]), x)
moby_id = ChooseBestSolver(prog)
moby = MakeSolver(moby_id)
print(moby.SolverName())
# The output is: "Moby LCP".
# The C++ type of the `moby` object is drake::solvers::MobyLCP.
That only allows for calling Moby via the MathematicalProgram interface, however. To call any MobyLCP-specific C++ functions like SolveLcpFastRegularized, those would need to be added to the bindings code specifically before they could be used.
You can file a feature request on the Drake GitHub page when you need access to C++ classes or functions that aren't bound into Python yet, or, even better, open a pull request with the bindings that you need.
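For completeness, a sketch of driving it end to end through that MathematicalProgram route (the same LCP as above; solver instances in pydrake expose Solve, which returns a MathematicalProgramResult):
import numpy as np
from pydrake.all import ChooseBestSolver, MakeSolver, MathematicalProgram

# Same LCP as above: identity M, q = [1, 2].
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2)
prog.AddLinearComplementarityConstraint(np.eye(2), np.array([1, 2]), x)
moby = MakeSolver(ChooseBestSolver(prog))

# Solve through the generic solver interface and read back the solution.
result = moby.Solve(prog)
print(result.is_success())
print(result.GetSolution(x))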

My kernel keeps dying in Jupyter Notebook when I run the fit function

My TensorFlow version is 2.6.0.
I've reinstalled Jupyter Notebook, upgraded pip, upgraded the TensorFlow library, and added these lines:
import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
and still my kernel keeps dying.
This is the code I tried to run:
learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc', patience=3,
                                            verbose=1, factor=0.5, min_lr=0.00001)
es = EarlyStopping(monitor='val_categorical_accuracy', patience=4)
print('====')
history = model.fit_generator(generator=train_batches,
                              steps_per_epoch=train_batches.n // batch_size,
                              epochs=epochs,
                              validation_data=val_batches,
                              validation_steps=val_batches.n // batch_size,
                              verbose=0,
                              callbacks=[learning_rate_reduction, es])
In my experience you should try one of these:
Check your environment variables: make sure you have CUDA_PATH set, and that the paths to cuda/bin, cuda/include, and cuda/lib/x64 are in PATH in System Variables.
Check whether the model is too complex by training a smaller, simpler model first (see the sketch below).
Make sure Anaconda Navigator is up to date. In my experience Python 3.8.5 and TensorFlow 2.7 work together and can be installed from the Environments tab in Anaconda Navigator.
If it breaks even with a simple model, it means something is wrong with the PATH in your system environment.
If you're using VSCode, you might have to set all the variables before launching it.
If you're using Anaconda, you can install the packages directly from its interface.
I'm using TensorFlow 2.8 and Python 3.10 and it still works.
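As a concrete starting point for the "smaller simpler model" test above, here is a minimal smoke test (a sketch; the data and layer sizes are arbitrary). If even this tiny model kills the kernel, suspect the TensorFlow/CUDA installation rather than your network:
import numpy as np
import tensorflow as tf

# Print what TensorFlow reports and whether it sees a GPU.
print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))

# Tiny random dataset and model: if fit() crashes the kernel here too,
# the problem is the install (e.g. CUDA/cuDNN mismatch), not your model.
x = np.random.rand(256, 4).astype('float32')
y = np.random.randint(0, 2, size=(256, 1))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(x, y, epochs=1, batch_size=32, verbose=1)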

IBM Cloud Functions - Python Actions

I'm learning how to use serverless functions. I'm trying to connect a Watson Assistant through webhooks using a Python action that processes a small dataset, and I'm still struggling to make it work.
I've done my coding in a Jupyter environment, reading the raw CSV dataset from GitHub and using pandas to handle it. The issue is that when I invoke the action in IBM Functions it works about 10% of the time. I debugged in the Jupyter and Visual Studio environments and the code seems to be fine, but once I move it to the IBM Functions environment it doesn't perform.
import sys
import csv
import json
import pandas as pd

location = ('Germany')  # Passing country parameter for testing purposes
data = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_daily_reports/03-24-2020.csv')

def main(args):
    location = args.get("location")
    for index, row in data.iterrows():
        currentLoc = row['Country/Region']
        if currentLoc == location:
            covid_statistics = {
                "Province/State": row['Province/State'],
                "Country/Region": row['Country/Region'],
                "Confirmed": row['Confirmed'],
                "Deaths": row['Deaths'],
                "Recovered": row['Recovered']
            }
            return {"message": covid_statistics}
        else:
            return {"message": "Data not available"}

Image processing in TensorFlow distributed session

I am testing out TensorFlow Distributed (https://www.tensorflow.org/deploy/distributed) with my local machine (Windows) and an Ubuntu VM.
I have followed this link, Distributed tensorflow replicated training example: grpc_tensorflow_server - No such file or directory, and set up the TensorFlow "server" as per below.
import tensorflow as tf
parameter_servers = ["10.0.3.15:2222"]
workers = ["10.0.3.15:2222","10.0.3.15:2223"]
cluster = tf.train.ClusterSpec({"local": parameter_servers, "worker": workers})
server = tf.train.Server(cluster, job_name="local", task_index=0)
server.join()
Here “10.0.3.15” is my Ubuntu VM's local IP address.
On the Windows host machine, I am doing some simple image preprocessing using OpenCV and extending the graph session to the VM. I used the following code for that:
import tensorflow as tf
from OpenCVTest import *

with tf.Session("grpc://10.0.3.15:2222") as sess:
    ### OpenCV calling section ###
    img = cv2.imread('Data/ball.jpg')
    grey_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    flat_img_array = img.flatten()
    x = tf.placeholder(tf.float32, shape=(flat_img_array[0], flat_img_array[1]))
    y = tf.multiply(x, x)
    sess.run(y)
I can see that my session is running on my Ubuntu machine; please see the screenshot below.
[Screenshot: Test_result]
[Note: in the image you can see that the Windows console is calling the session and the Ubuntu terminal is listening to that same session.]
But the strange thing I observed is that the OpenCV preprocessing operation (grey_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)) leverages the local OpenCV package. I assumed that when I run a session on another server, it would do all the operations on that server. In my case, since I am running the session on the Ubuntu VM, everything defined under tf.Session("grpc://10.0.3.15:2222") should also run on that Ubuntu VM, using the VM's local packages, but that's not happening.
Is my understanding of distributed sess.run(y) correct? When we run the session in a distributed manner, does it only offload the graph computation to the other machine through gRPC? (See the sketch below for what I mean.)
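To make my understanding concrete, here is a minimal sketch of what I think happens (TF1-style API, reusing the cluster above; the device string is my assumption):
import tensorflow as tf

# Only graph operations can be pinned to a remote worker; the cv2 calls
# above run in the local client process, outside the graph.
with tf.device("/job:worker/task:0"):
    x = tf.placeholder(tf.float32, shape=(2, 2))
    y = tf.multiply(x, x)

# The session only ships graph execution (y) to the server via gRPC;
# the feed values are computed locally and sent over the wire.
with tf.Session("grpc://10.0.3.15:2222") as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0], [3.0, 4.0]]}))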
I would summarize my question like this: "I am planning to do heavy preprocessing before feeding values to the tensors, and I want to do it in a distributed way. What would be the better approach? My initial understanding was that I could do it with TensorFlow Distributed, but after this test I think I may not be able to."
Any thoughts would be of real help.
Thank you.

Have problems building neo4j GetAll server extension

I have been trying to build the example GetAll neo4j server extension, but unfortunately I cannot make it work. I installed the Windows version of neo4j and ran it as a server. I also installed the Python neo4jrestclient and am accessing neo4j through Python scripts. The following works fine:
from neo4jrestclient.client import GraphDatabase
gdb = GraphDatabase("http://localhost:7474/db/data/")
print gdb.extensions
It gives me "CypherPlugin" and "GremlinPlugin". I want to build the example GetAll server extension, which is Java. I am using Eclipse. I am able to create the jar file in the folder "c:\neo4j_installation_root\neo4j-community-1.7\plugins\GetAll.jar", but when I restart the neo4j server and run the neo4jrestclient it does not show the GetAll server extension. I searched a lot, but in vain. I have lots of experience with C++ and Python, but new to Java. I will really appreciate some help to be able to build neo4j server extensions. It is critically important for my evaluation of neo4j.
Are you sure there is a META-INF/services file listing the plugin class, and that the jar file was created with intermediate directories (which is not the default in Eclipse's export settings), so the directories are seen by the classloader?
Check out the tips at http://docs.neo4j.org/chunked/snapshot/server-plugins.html
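For reference, that services entry is just a plain-text file packaged inside the jar, named after the SPI interface, with one fully-qualified plugin class per line (the class name below is from the GetAll example and is an assumption about your package):
META-INF/services/org.neo4j.server.plugins.ServerPlugin:
org.neo4j.examples.server.plugins.GetAll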
You can do get-all with Bulbs (http://bulbflow.com) without building an extension:
>>> from bulbs.neo4jserver import Graph
>>> g = Graph()
>>> g.vertices.get_all()
>>> g.edges.get_all()
Custom models work the same way:
# people.py
from bulbs.model import Node, Relationship
from bulbs.property import String, Integer, DateTime
from bulbs.utils import current_datetime

class Person(Node):
    element_type = "person"

    name = String(nullable=False)
    age = Integer()

class Knows(Relationship):
    label = "knows"

    created = DateTime(default=current_datetime, nullable=False)
And then call get_all on the model proxies:
>>> from people import Person, Knows
>>> from bulbs.neo4jserver import Graph
>>> g = Graph()
>>> g.add_proxy("people", Person)
>>> g.add_proxy("knows", Knows)
>>> james = g.people.create(name="James")
>>> julie = g.people.create(name="Julie")
>>> g.knows.create(james, julie)
>>> g.people.get_all()
>>> g.knows.get_all()
