Not able to properly convert OSM data to GeoJSON with osm2geojson in Python

I am running into an issue when converting OSM data to GeoJSON using osm2geojson. Here is the piece of code I am using to convert the data:
import json
import requests
from osm2geojson import json2geojson

OSM_RETRIES = 3  # assumed value; this constant is defined elsewhere in the original project

def get_osm_geometry(osm_id):
    retries_left = OSM_RETRIES
    while retries_left:
        response = requests.get(
            f"http://overpass-api.de/api/interpreter?data=%5Bout%3Ajson%5D%3Brelation%28{osm_id}%29%3Bout%20geom%3B%0A")
        # print(response.text)
        if response.status_code == 200:
            response_json = json.loads(response.text)
            geometry = json2geojson(response_json)
            return geometry
        else:
            # log something, then retry
            retries_left -= 1
The issue I am facing is specifically with OSM relation 1942601.
I can see that I get a proper response from the Overpass API by running
[out:json];
relation(1942601);
out geom;
However, when the geometry downloaded by the code above is imported into a GeoJSON viewer, it does not show the proper shape.
I expect the shapes to match. What could I be missing?
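For reference, here is a minimal sketch of the same request built by letting requests URL-encode the query string instead of hand-building the URL; the endpoint and query are taken from the question above, and whether this changes the converted shape for relation 1942601 is something to verify:

import requests
from osm2geojson import json2geojson

OVERPASS_URL = "http://overpass-api.de/api/interpreter"
query = "[out:json];relation(1942601);out geom;"

# requests handles percent-encoding of the query via the params argument
response = requests.get(OVERPASS_URL, params={"data": query})
response.raise_for_status()

geometry = json2geojson(response.json())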

Related

How to Reproject GeoJSON without crs property to WGS84 (to use React Leaflet)

(Please help, I've been struggling with this problem for more than three days... 🤖)
I got a GeoJSON file from the National Statistical Office, which means it's official data, and the coordinates in this file look like this:
[959394.1770449197,1948599.8772112513],
... ,
[1140386.5164449196,1684523.5489112514],
It's a GeoJSON object without a member named "crs", and as you can see, it's not using the WGS84 datum. These seem to be coordinates for drawing polygons, which are the shapes of each district. I assume there is no problem with the data structure.
I tried to create a map from this file with React Leaflet, but it kept failing. To find out whether the problem was the GeoJSON I'm using, I tried other GeoJSON files and they worked fine (meaning an interactive map was created on the web). By comparing the GeoJSON files, I found that the coordinates need to be in WGS84 to work with Leaflet. So I tried to transform the GeoJSON to WGS84 using reproject. In my React app project, I installed reproject and epsg and put in the code below:
import * as mapData from '../data/sigunguWithPopGeo.json'
import { toWgs84 } from 'reproject'
import 'epsg'
let epsg = require('epsg');
toWgs84(mapData, undefined, epsg);
And this error was returned:
Error: Unable to detect CRS, GeoJSON has no "crs" property.
Thanks for reading this long intro. Finally, here is my question:
Is there any way to reproject GeoJSON without a "crs" property to WGS84? I also tried converting the coordinates to WGS84 with mapshaper.org. Again, I got an error caused by the undefined coordinate system of the GeoJSON file:
Unable to project -- source coordinate system is unknown
Should I consider adding a crs property to the GeoJSON? This is my very first time creating an interactive map using GeoJSON with React Leaflet, so any advice from people who have worked on similar projects would really help me!
Luckily, I solved the problem by myself!
Instead of continuing to look for ways to convert GeoJSON with an undefined coordinate system to WGS84, I visited the National Statistical Office's website to figure out which coordinate system was used in the source data, which turned out to be EPSG:5179. Then I converted the GeoJSON file from EPSG:5179 to EPSG:4326 (WGS84) with the MyGeoData Converter. Before downloading the converted data, I checked on the map that the coordinates had been successfully converted to proper lat/lng values. Hope this solution helps anyone struggling with similar problems. 👩‍🔧
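If you would rather do the same conversion in code instead of an online converter, here is a sketch in Python using pyproj (a different tool from the reproject/epsg packages above, shown only as an illustration); the EPSG codes and the sample coordinate come from the question and answer:

from pyproj import Transformer

# EPSG:5179 (Korea 2000 / Unified CS) -> EPSG:4326 (WGS84)
# always_xy=True keeps input as (easting, northing) and output as (lon, lat)
transformer = Transformer.from_crs("EPSG:5179", "EPSG:4326", always_xy=True)

# One coordinate pair from the question's GeoJSON
lon, lat = transformer.transform(959394.1770449197, 1948599.8772112513)
print(lon, lat)  # should land inside South Korea if the source CRS is correct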

unknown url type: '//drive.google.com/drive/folders/11XfAPOgFv7qJbdUdPpHKy8pt6aItGvyg'

I am trying to use a Haar cascade classifier for object detection. I copied some code for the Haar cascade algorithm, but it's not working. It gives the error
unknown url type: '//drive.google.com/drive/folders/11XfAPOgFv7qJbdUdPpHKy8pt6aItGvyg'
even though the link itself works.
import urllib.request, urllib.error, urllib.parse
import cv2
import os

def store_raw_images():
    neg_images_link = '//drive.google.com/drive/folders/11XfAPOgFv7qJbdUdPpHKy8pt6aItGvyg'
    neg_image_urls = urllib.request.urlopen(neg_images_link).read().decode()
    pic_num = 1
    if not os.path.exists('neg'):
        os.makedirs('neg')
    for i in neg_image_urls.split('\n'):
        try:
            print(i)
            urllib.request.urlretrieve(i, "neg/"+str(pic_num)+".jpg")
            img = cv2.imread("neg/"+str(pic_num)+".jpg", cv2.IMREAD_GRAYSCALE)
            # should be larger than samples / pos pic (so we can place our image on it)
            resized_image = cv2.resize(img, (100, 100))
            cv2.imwrite("neg/"+str(pic_num)+".jpg", resized_image)
            pic_num += 1
        except Exception as e:
            print(str(e))

store_raw_images()
I am expecting the output to be a set of negative images for building the dataset for object detection.
I think the missing "https:" at the start of the URL is causing this specific error.
Furthermore, you cannot just load a Drive folder when it is not shared (you should use the sharing link), and even then it is not optimal: you would have to parse the HTML response, and it may not even work.
I strongly suggest using a normal HTTP server or the Google Drive Python API.
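As a rough sketch of the "normal HTTP server" approach, mirroring what the original code assumes (the list URL below is a placeholder, not a real endpoint): host a plain-text file with one image URL per line and download from it:

import os
import urllib.request
import cv2

def store_raw_images(list_url="https://example.com/neg_image_urls.txt"):  # placeholder URL
    # The server is expected to return a plain-text list of image URLs, one per line
    neg_image_urls = urllib.request.urlopen(list_url).read().decode()
    os.makedirs('neg', exist_ok=True)
    for pic_num, url in enumerate(neg_image_urls.splitlines(), start=1):
        try:
            path = f"neg/{pic_num}.jpg"
            urllib.request.urlretrieve(url, path)
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            cv2.imwrite(path, cv2.resize(img, (100, 100)))  # 100x100 negatives
        except Exception as e:
            print(e)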

Laravel 5 Intervention make base64 blob from POST API unable to init from given binary data

I'm sorry, this may be a couple of issues and I'm not sure how to phrase the question properly. I've tried a number of solutions over 2 days to no avail... thank you in advance for your help!
A photo is sent from an iPad app via a POST API as base64 (no metadata, just the base64 blob). I'm simply trying to decode it and save it locally.
I'm testing using Postman:
...com/api/register?first_name=John&photo=/9j/4AAQSkZJRgABAQAAAQAB...[base 64 image of about 400kb]
In Laravel, I am using Intervention:
$jpg_url = "image-".time().".jpg";
$path = "/public/".$jpg_url;
$base=base64_decode($customer['photo']);
Image::make($base)->save($path);
and I am getting an "Unable to init from given binary data" error.
Here's what I don't quite understand and would appreciate an ELI5 explanation:
- When I save the POST from iPad directly into the DB with the following:
$photo = $customer->photo = $customer['photo']
The blob in MySQL looks good; I can manually copy and paste it into a web decoder fine.
However, when I use Postman, the "+" characters in $photo's base64 are changed into spaces and the image doesn't render.
Is this a datatype issue? Am I receiving a long blob that is being converted into a string? What is the best practice for receiving images from a mobile app?
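For what it's worth, the "+" to space behavior described above is what URL decoding does to form/query-string data, since "+" is the urlencoded representation of a space; a small illustration in Python (separate from the Laravel code itself, just to show the mechanism):

from urllib.parse import quote_plus, unquote_plus

raw = "/9j/4AAQ+SkZJRg=="          # a base64-like fragment containing '+'
print(unquote_plus(raw))           # '+' is decoded to a space: "/9j/4AAQ SkZJRg=="
print(quote_plus(raw))             # properly encoded form, '+' becomes %2B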

How to get a bitmap image in Ruby?

The Google Vision API requires a bitmap sent as an argument. I am trying to convert a PNG from a URL to a bitmap to pass to the Google API:
require "google/cloud/vision"
PROJECT_ID = Rails.application.secrets["project_id"]
KEY_FILE = "#{Rails.root}/#{Rails.application.secrets["key_file"]}"
google_vision = Google::Cloud::Vision.new project: PROJECT_ID, keyfile: KEY_FILE
img = open("https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png").read
image = google_vision.image img
ArgumentError: string contains null byte
This is the relevant source code from the gem:
def self.from_source source, vision = nil
  if source.respond_to?(:read) && source.respond_to?(:rewind)
    return from_io(source, vision)
  end
  # Convert Storage::File objects to the URL
  source = source.to_gs_url if source.respond_to? :to_gs_url
  # Everything should be a string from now on
  source = String source
  # Create an Image from a HTTP/HTTPS URL or Google Storage URL.
  return from_url(source, vision) if url? source
  # Create an image from a file on the filesystem
  if File.file? source
    unless File.readable? source
      fail ArgumentError, "Cannot read #{source}"
    end
    return from_io(File.open(source, "rb"), vision)
  end
  fail ArgumentError, "Unable to convert #{source} to an Image"
end
https://github.com/GoogleCloudPlatform/google-cloud-ruby
Why is it telling me the string contains a null byte? How can I get a bitmap in Ruby?
According to the documentation (which, to be fair, is not exactly easy to find without digging into the source code), Google::Cloud::Vision#image doesn't want the raw image bytes, it wants a path or URL of some sort:
Use Vision::Project#image to create images for the Cloud Vision service.
You can provide a file path:
[...]
Or any publicly-accessible image HTTP/HTTPS URL:
[...]
Or, you can initialize the image with a Google Cloud Storage URI:
So you'd want to say something like:
image = google_vision.image "https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png"
instead of reading the image data yourself.
Instead of using write, you want to use IO.copy_stream, as it streams the download straight to the file system instead of reading the whole file into memory and then writing it:
require 'open-uri'
require 'tempfile'
uri = URI("https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png")
tmp_img = Tempfile.new(uri.path.split('/').last)
IO.copy_stream(open(uri), tmp_img)
Note that you don't need to set the 'r:BINARY' flag as the bytes are just streamed without actually reading the file.
You can then use the file like this:
require "google/cloud/vision"

# Use fetch as it raises an error if the key is not present
PROJECT_ID = Rails.application.secrets.fetch("project_id")
# Rails.root is a Pathname object so use `.join` to construct paths
KEY_FILE = Rails.root.join(Rails.application.secrets.fetch("key_file"))

google_vision = Google::Cloud::Vision.new(
  project: PROJECT_ID,
  keyfile: KEY_FILE
)

image = google_vision.image(File.absolute_path(tmp_img))
When you are done, clean up by calling tmp_img.unlink.
Remember to read things in binary format:
open("https://www.google.com/..._272x92dp.png",'r:BINARY').read
If you forget this, it might try to open it as UTF-8 textual data, which would cause lots of problems.

OpenCV - create PNG image

As part of my project, I want to send a stream of images over WebSockets from an embedded machine to a client application and display them in an img tag to achieve streaming.
First I tried to send raw RGB data (752*480*3, roughly 1 MB per frame), but I ran into problems encoding the image to PNG in JavaScript from my RGB data, so I decided to encode the data to PNG first and then send it over WebSockets.
The thing is, I am having problems encoding my data to PNG using the OpenCV library that is already used in the project.
Firstly, some code:
websocketBrokerStructure.matrix = cvEncodeImage(0, websocketBrokerStructure.bgrImageToSend, 0);
websocketBrokerStructure.imageDataLeft = websocketBrokerStructure.matrix->rows * websocketBrokerStructure.matrix->cols * websocketBrokerStructure.matrix->step;
websocketBrokerStructure.imageDataSent = 0;
but I am getting a strange error during execution of the second line:
terminate called after throwing an instance of 'std::logic_error'
what(): basic_string::_S_construct NULL not valid
and I am a bit confused why I am getting this error from my code.
Also, am I understanding this right: after invoking cvEncodeImage (where bgrImage is an IplImage* with 3 channels, BGR), do I just need to iterate through the data member of my CvMat to get all of the PNG-encoded data?
The cvEncodeImage function takes as its first parameter the extension of the image you want to encode. You are passing 0, which is the same thing as NULL. That's why you are getting the message NULL not valid.
You should probably use this:
websocketBrokerStructure.matrix = cvEncodeImage(".png", websocketBrokerStructure.bgrImageToSend, 0);
You can check out the documentation of cvEncodeImage here.
You can check out some examples of cvEncodeImage, or its C++ brother imencode here: encode_decode_test.cpp. They also show some parameters you can pass to cvEncodeImage in case you want to adjust them.

Resources