I need to split an image into tiles that fit a map at different zoom levels.
I tried GDAL2Tiles but was not able to generate any tiles.
The image I am splitting into tiles is a PNG.
I developed GDAL2Tiles a couple of years back as a student project. It is a command-line utility with a manual available at http://www.gdal.org/gdal2tiles.html and step-by-step tutorials online, one of them made by people from Google: https://developers.google.com/kml/articles/raster
In case you are not familiar with the command line, you can easily try MapTiler (http://www.maptiler.com/), a significantly improved and rewritten version of the tool for creating map tiles. MapTiler comes with a user-friendly graphical interface and is easy to install on Windows, Mac, or Linux.
See a video tutorial for a demonstration of usage: http://youtu.be/eJxdCe9CNYg
Is there an annotation tool to produce multi-layer (for overlapping objects within the image) and pixel-exact image annotations on iPad?
Background to my question:
There are a lot of annotation tools for Linux and Windows
(e.g. the ones listed here: https://www.v7labs.com/blog/best-image-annotation-tools
or here: https://humansintheloop.org/10-of-the-best-open-source-annotation-tools-for-computer-vision-2021/).
I haven't tried all of them, but none of them seems to be available for the iPad.
I am using the iPad to make image annotations because annotating with a stylus is faster for me than with a mouse on the PC (and I can also annotate when I am not in the office). Further, most annotation tools feel clunky and overloaded with bureaucracy (this is only my subjective opinion).
I am currently using Adobe Fresco (which sucks only because it's not open source and a little expensive). It works well in combination with a small script that I wrote to convert the .psd files into torch tensors.
My workflow with Fresco is fast and the annotations are very precise. However, I was bashed by a reviewer when submitting a paper that mentioned the annotations were produced with Fresco. The paper was rejected because the reviewer thought annotating images with Fresco was ridiculous and that there are supposedly much better alternatives (which he did not mention)... and which I am still too dumb to find. Any suggestions?
I would like to know how to draw custom lands for an Openstreetmap project. My final purpose is to reproduce a fantasy map with OSM technology.
It's not clear to me how I can generate lands data (continents, islands and so on).
I know it is possible because the project https://opengeofiction.net/ does basically the same thing.
I am a new OSM user and I am taking my first steps with GIS software.
I have built my own tile server in the cloud (Ubuntu 18.04) by following different tutorials.
I installed JOSM and QGIS to edit maps, but I feel a bit lost with all the options and features.
I have already posted questions in the OpenStreetMap forum but got no response.
I am sure I need only a little hint to get started.
My expected result is to be able to draw a little "imaginary" island.
On a small scale you can use JOSM without OSM download/upload, and just save your edited data locally as an OSM XML file.
That file can then be fed into the different renderers as a source file.
On a large scale you would end up creating a copy of the whole OSM stack, serving your own data, like https://wiki.openstreetmap.org/wiki/OpenGeofiction does.
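To make the small-scale route concrete, here is a purely illustrative OSM XML sketch of a tiny imaginary island (IDs, coordinates, and the name are all made up; JOSM uses negative IDs for objects that have never been uploaded):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<osm version="0.6" generator="JOSM">
  <!-- four corner nodes of a tiny square island (all values invented) -->
  <node id="-1" lat="0.010" lon="0.010" />
  <node id="-2" lat="0.010" lon="0.020" />
  <node id="-3" lat="0.000" lon="0.020" />
  <node id="-4" lat="0.000" lon="0.010" />
  <!-- a closed way (first node repeated at the end), ordered
       counterclockwise so land is on the left of the way, as the
       natural=coastline convention requires -->
  <way id="-5">
    <nd ref="-1" />
    <nd ref="-4" />
    <nd ref="-3" />
    <nd ref="-2" />
    <nd ref="-1" />
    <tag k="natural" v="coastline" />
    <tag k="name" v="Imaginary Island" />
  </way>
</osm>
```

A renderer that understands coastlines should then draw everything inside that way as land and everything outside as sea.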
I am trying to make a photo editing application for iOS, but am not sure where to start looking. I have attached an image made in Word... that hopefully depicts simply what I am trying to achieve. It will involve manipulating individual pixels of a shape/image and masking/clipping. How should I start, and what resources are available to me other than the developer docs?
Cheers
If you are not new to programming I would suggest a trial-and-error kind of approach. If it were me, I would follow an approach like this:
Figuring out what to do / what not to do
Do I need to develop the tech I want from scratch, or can I use some pods?
What are the good reads and example apps? (Try this)
Development approach
Build a photo gallery to pick images from
Build an EDIT mode screen
Get a set of template overlay images
Figure out how to overlay them on top of each other
Export the final picture as one image (see the sketch after this list for the last two steps)
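A minimal sketch of those last two steps, assuming UIKit (`baseImage` and `overlay` are placeholders for the picked photo and a chosen template):

```swift
import UIKit

// Minimal sketch: composite a template overlay on top of a base photo and
// export the result as a single UIImage. `baseImage` and `overlay` are
// assumed to come from the gallery picker and the template set above.
func composite(baseImage: UIImage, overlay: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: baseImage.size)
    return renderer.image { _ in
        let frame = CGRect(origin: .zero, size: baseImage.size)
        baseImage.draw(in: frame)   // photo first
        overlay.draw(in: frame)     // template on top
    }
}
```

In a real app you would probably also handle mismatched image sizes and content modes, but the compositing itself is this simple.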
The developer documentation is essential when it comes to learning new APIs, but sometimes it can be a little overwhelming. You can try reading the raywenderlich.com tutorials on Core Image first to get an idea (link here), or find a book on computer graphics. It is essential to understand at least the underlying techniques to program image processing code efficiently. In many cases you'll find there is a more elegant technique than just looping over pixels and modifying them one by one.
Then you can continue by reading about image compositing using Core Image, for example.
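For a taste of what that looks like, here is a minimal Swift sketch using the built-in CISourceOverCompositing filter (the input images are placeholders and error handling is reduced to optionals):

```swift
import CoreImage
import UIKit

// Minimal sketch: place `foreground` over `background` with Core Image.
func sourceOver(foreground: UIImage, background: UIImage) -> UIImage? {
    guard let fg = CIImage(image: foreground),
          let bg = CIImage(image: background),
          let filter = CIFilter(name: "CISourceOverCompositing") else { return nil }

    filter.setValue(fg, forKey: kCIInputImageKey)
    filter.setValue(bg, forKey: kCIInputBackgroundImageKey)

    guard let output = filter.outputImage else { return nil }
    // Creating a context per call is wasteful in real code; reuse one instead.
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: bg.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```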
I'm looking into a solution that will allow me to use OpenStreetMap data to render a 2D top-view vector-based map in iOS, instead of using pre-rendered tiles from a server, similar to Apple and Google Maps in iOS 6+.
I've done extensive research on this matter, but didn't find much information.
There are a number of iOS apps that do this, but no information on how they implement it. A couple of these apps are:
ForeverMap 2 by skobbler
Galileo Offline Maps
OffMaps 2
The first two apps work similarly to Apple and Google Maps: the map is drawn in real time whenever the zoom changes.
The last one appears to be using a slightly different approach. It renders the vector data at specific zoom levels and creates tiles which are then used as normal tiles downloaded from a tile server. So the rendering engine could actually be a tile source for the Route-Me library, but instead of downloading the tiles it renders them on the fly.
The first method is preferred.
[Q] I guess one could switch between methods fairly easily once the OpenGL ES renderer is in place. I mean, you could use the renderer as a source for Route-Me to create tiles, or you could use it as a real-time drawer, similar to a game. Am I right?
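To illustrate what I have in mind (hypothetical Swift types invented for this sketch only; this is not Route-Me's real API, and no OpenGL ES code appears here):

```swift
import CoreGraphics

// Standard XYZ tile addressing (hypothetical helper type).
struct TileID { let x: Int; let y: Int; let z: Int }

// One renderer behind both usage modes.
protocol VectorMapRenderer {
    // Method 1: draw an arbitrary viewport in real time, like a game loop.
    func render(center: CGPoint, zoomLevel: Double, viewportSize: CGSize)

    // Method 2: rasterize one fixed tile, so the renderer can stand in
    // for a network tile source (e.g. behind a tile-based map library).
    func renderTile(_ tile: TileID) -> CGImage?
}
```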
The closest solution I found is OpenStreetPad. However, it is using Core Graphics instead of OpenGL ES, so the rendering is not hardware accelerated.
Mapbox stated they are working on vector tiles and they'll probably provide an iOS solution for rendering; however, it may use Mapnik, so I am not sure how efficient that will be. And there has been no ETA on it since mid-2013.
[Q] Do you know of any other libraries, papers, guides, examples, or some other useful information on how to approach this? Basically how to handle the OSM data and how to actually use OpenGL ES / GLKit to draw that data on the device. Maybe some of the people who have done it can share a few things?
Old question, but there's a new answer.
WhirlyGlobe-Maply will render tile based vector maps on iOS. http://mousebirdconsulting.blogspot.com/2014/03/vector-maps-introduction.html
The technology that powered skobbler's ForeverMap 2 and their current GPS Nav & Maps app is now available on a pay-per-use basis. See their developer platform.
Note: they also have a free tier that can be used to develop/launch small apps.
They render the map using OpenGL and "vector data tiles". These vector data tiles contain information regarding road geometry (so you can have routing), POI data, and other map features (e.g. boundary limits).
There is a list of OSM-based applications for iOS. It also includes a few open source projects, for example Navit. Navit seems to render the map using SDL/OpenGL. See the Navit iOS wiki page for more information.
I am new to iOS programming and am interested in working with images. Basically, I want to be able to obtain the RGB tuples (with values in 0-255) of every pixel in a given image. What would be the best way of doing this? Would I need to use OpenGL, or something similar?
Thanks
If you want to work with images, get a copy of Apple's 'Quartz 2D Programming Guide'. If you want even more detailed how-tos, get a copy of the "Programming with Quartz" book on Amazon (it says Mac in the title, as it predates iOS).
Essentially you are going to take images, draw them into bitmap contexts, then determine the RGBA layout by querying the image.
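A minimal sketch of that approach in Swift (premultiplied alpha and color-space subtleties are glossed over; treat it as a starting point, not a finished routine):

```swift
import UIKit

// Draw a CGImage into an RGBA bitmap context and read the raw bytes back.
func pixelRGBA(of image: UIImage) -> [UInt8]? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerRow = width * 4
    var buffer = [UInt8](repeating: 0, count: height * bytesPerRow)

    let drawn = buffer.withUnsafeMutableBytes { ptr -> Bool in
        guard let context = CGContext(
            data: ptr.baseAddress,
            width: width,
            height: height,
            bitsPerComponent: 8,
            bytesPerRow: bytesPerRow,
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
        ) else { return false }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    // buffer now holds R, G, B, A bytes (0-255) for each pixel, row by row.
    return drawn ? buffer : nil
}
```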
If you want to use system resources to assist you in making certain types of changes to images, there is an OS X framework, recently moved to iOS, called the Accelerate framework, and it has a lot of functions for image manipulation (vImage).
For reading and writing images to the file system look at Apple's 'Image I/O Guide'. For advanced filtering there is Core Image, which allows you to apply filters to images.
EDIT: If you have any interest in really fast, accelerated code that uses the GPU to perform sophisticated filtering, you can check out Brad Larson's GPUImage project on GitHub.