I'm trying to extract some data (a map image) from a PNG file which is tiled somehow. The file itself is only 256x256 pixels (according to 'Get Info' on the Mac) but it is 23MB. It is from an iPad app called Mud Map and it contains a map that I purchased, but I've lost the original that I converted to this format. When I view this file (renamed to a .PNG) I see one section of the map - 256x256px.
I'm asking this question on StackOverflow because I want to know more about these tiled images. How does one create a tiled PNG, and what software will open and/or create these files? I'm also interested in what metadata is required. I love the outdoors and mapping!
The answer to this question is that it cannot be done in the manner I have described.
The images in the PNG are not tiled; the files are just merged together, which is no doubt a feature specific to the program, as it does not appear to be any kind of standard.
I don't have access to the iPad application you mentioned, but I can share some thoughts on what might be going on here.
1) Map tiles are commonly used in web GIS applications such as Google Maps. They improve performance, especially when the user pans frequently. A typical map window is divided into, for instance, a 4x4 grid of separate requests, so when the user pans a little, perhaps only 4 new tiles need to be fetched instead of re-requesting the whole map across all 16 tiles.
The source for these tiles can be a set of pre-generated tiles or just one static map image.
2) Assembling separate images into one in GIS is called mosaicking. A GIS server can read a collection of images and mosaic them into one, handling the overlapping parts according to a certain rule. When the images follow a predefined grid, seamless and without overlap, they are called tiled images. We can pre-generate the tiles from one mosaicked image, or serve them on the fly; some GIS servers/libraries/applications have a tile server function built in. A small tile-addressing sketch follows below.
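If it helps to make the tiling idea concrete, below is a minimal sketch of the standard XYZ / "slippy map" addressing scheme used by Google Maps, OpenStreetMap and most tile servers (256x256 px tiles, doubling in resolution per zoom level). This describes the common scheme in general - it is an assumption, not a statement about how Mud Map stores its tiles.

```python
import math

def deg2tile(lat_deg, lon_deg, zoom):
    """Return the (x, y) tile indices covering a lat/lon at a given zoom
    level in the standard XYZ / slippy-map scheme (256x256 px tiles)."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom  # number of tiles along each axis at this zoom level
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Example: which tile contains Sydney at zoom level 12?
print(deg2tile(-33.8688, 151.2093, 12))   # -> (3768, 2457)
```

At each zoom level the world is covered by 2^z x 2^z such tiles, which is why a map that looks like one small image on screen can be backed by many megabytes of tile data.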
Related
I'm currently an MS student in Medical Physics and I have a great need to be able to overlay an isodose distribution from an RTDOSE file onto a CT image from a .dcm file set.
I've managed to extract the image and dose pixel arrays myself using pydicom and dicom_numpy, but the two arrays are not the same size! So if I overlay the two, the dose will not be in the correct position compared with what the Elekta Gamma Plan software exported.
I've played around with dicompyler and 3DSlicer, and they obviously are able to do this even though the arrays are not the same size. However, I don't think I can export the numerical data when using this software; I can only scroll through and view it as an image. How can I overlay the RTDOSE onto a CT image?
Thank you
For what you want, it sounds like you should use SimpleITK (or equivalent - my experience is with SITK) to do the DICOM handling, not pydicom.
DICOM has a complete built-in system for specifying the 3D position and location of all pixel data in patient coordinates. It uses a set of attributes in the DICOM files known as the Image Plane Module tags. See here for a good overview.
The SimpleITK library fully understands and uses the 3D Image Plane tags to identify and locate any image in patient coordinates by default, irrespective of things such as the specific pixel spacing, slice thickness, etc.
So - in your case - if you use SITK to open your studies, then you should be able to overlay them correctly "out of the box", because SITK will do all the work to parse the Image Plane Module tags and locate the data in patient coordinates - just like you get with 3DSlicer.
Pydicom, in contrast, doesn't itself try to use any of that information at all. It only gives you the raw pixel arrays (for images).
Note I use both pydicom and SITK. This isn't something bad about pydicom, but more a question of right tool for the job. In fact, for many (most?) things I use pydicom, but for any true 3D type work, SITK is the easier toolkit to use.
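For illustration, here is a rough sketch of the kind of SITK workflow described above: resampling the RTDOSE grid onto the CT grid in patient coordinates. The file paths, the linear interpolator and the identity transform are assumptions; it's also worth verifying that the reader picks up the RTDOSE slice spacing (GridFrameOffsetVector) correctly for your data, and remember to apply DoseGridScaling if you need absolute dose.

```python
import SimpleITK as sitk

# Load the CT series (directory of .dcm slices) as a single 3D volume.
reader = sitk.ImageSeriesReader()
ct_files = reader.GetGDCMSeriesFileNames("path/to/ct_series")   # placeholder path
reader.SetFileNames(ct_files)
ct = reader.Execute()

# Load the RTDOSE volume (a single multi-frame DICOM file).
dose = sitk.ReadImage("path/to/rtdose.dcm")                     # placeholder path

# Resample the dose onto the CT grid. Both images carry their own origin,
# spacing and direction cosines (from the Image Plane Module tags), so an
# identity transform is enough to align them in patient coordinates.
dose_on_ct = sitk.Resample(
    dose,              # image to resample
    ct,                # reference grid (size/origin/spacing/direction)
    sitk.Transform(),  # identity transform: both already live in patient space
    sitk.sitkLinear,   # interpolation
    0.0,               # value used outside the dose grid
)

# The arrays now have identical shapes and can be overlaid directly.
ct_array = sitk.GetArrayFromImage(ct)
dose_array = sitk.GetArrayFromImage(dose_on_ct)
# NB: multiply dose_array by the RTDOSE DoseGridScaling tag for dose in Gy.
```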
Example of tileset:
http://www.rpg-studio.org/wiki/images/9/92/Tileset.png
How do I import these images into this grid in Xcode?
https://koenig-media.raywenderlich.com/uploads/2016/06/AdjacencyTileGrid.png
The problem is that Xcode doesn't understand that there are many sub-images inside the parent image.
I've already seen a lot of examples that use the Tiled map editor, but it has its own format and you can't design such levels in Xcode's visual editor, so they are not appropriate for me.
I've also seen that people tend to avoid using tilesets - they get a lot of separate images from somewhere instead, and don't describe what to do with a single big tileset.
The simplest solution might be to just start with individual images that can feed into Xcode’s image handling pipeline.
My understanding of the tilesets you've described is that they are produced from individual images with a tool like TexturePacker, and the result is then consumed by the Tiled map editor. The TMX maps produced by Tiled are consumed in Xcode using SKTiled for Swift or JSTileMap for Objective-C.
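If you do decide to go from one big tileset down to the individual images that Xcode (or TexturePacker) expects, a small script is enough. Here's a rough sketch using Python's Pillow; the 32x32 tile size, file names and output naming are assumptions - adjust them to your tileset.

```python
from PIL import Image
import os

TILE = 32                           # assumed tile size in pixels
sheet = Image.open("Tileset.png")   # the big tileset image
cols = sheet.width // TILE
rows = sheet.height // TILE

os.makedirs("tiles", exist_ok=True)
for row in range(rows):
    for col in range(cols):
        box = (col * TILE, row * TILE, (col + 1) * TILE, (row + 1) * TILE)
        # One PNG per tile; the resulting folder can be dragged into an
        # Xcode asset catalog or fed to TexturePacker.
        sheet.crop(box).save(f"tiles/tile_{row}_{col}.png")
```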
I want to generate different rasters according to scale, like Google Maps, using the GDAL tool gdal2tiles for my web GIS, which runs on OL3. For example,
scale 1 :
and the same place Scale 2 :
What you are looking for is a tile server to seed the tiles for your OpenLayers application to consume. The different image output at different zoom levels for the same location is called a raster pyramid.
http://webhelp.esri.com/arcgisdesktop/9.3/index.cfm?TopicName=Raster_pyramids
Back to your question: if you want to generate tiles, there are multiple solutions out there; many free, open-source ones such as GeoWebCache and MapProxy are built on GDAL, as you mentioned.
You can seed/replace tile images at different zoom levels to generate your own raster pyramids, and then either consume the Tile Map Service or access the tiles directly via OpenLayers. A minimal gdal2tiles invocation is sketched below.
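As an illustration, a minimal gdal2tiles run might look like the sketch below (wrapped in Python for consistency with the rest of this thread). The zoom range and file names are assumptions, and the input raster should already be georeferenced (e.g. a GeoTIFF).

```python
import subprocess

# Generate a tile pyramid for zoom levels 0-12 from a georeferenced raster.
# gdal2tiles.py ships with GDAL; "-p mercator" selects the spherical-mercator
# tiling profile used by Google Maps and OL3.
subprocess.run(
    [
        "gdal2tiles.py",
        "-p", "mercator",   # Google/OSM tiling scheme
        "-z", "0-12",       # zoom levels to generate (assumed range)
        "input_map.tif",    # source raster (placeholder name)
        "tiles_output/",    # output directory of z/x/y.png tiles
    ],
    check=True,
)
```

The resulting z/x/y directory can be served as static files and consumed in OL3 with an XYZ source. Note that gdal2tiles traditionally writes TMS-ordered tiles (y axis flipped), so you may need the {-y} placeholder in your URL template or, in newer GDAL versions, the --xyz option.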
This question might be an "Open Question" and many of you might be eager to close it, but please don't. Let me explain.
As we all know, JPEG has two kinds of compression (at least in Photoshop's save dialog):
optimized, where the image loads roughly line by line
progressive, where the image loads mosaic-like at first, then progressively sharper until it reaches the original resolution
I have read a lot of PNG/JPEG optimization articles before, but now I have encountered this awesome third kind of compression via a random Google Image search. The JPEG in question is this:
http://storage.googleapis.com/marc-pres/boston-event-1012/images/google-data-center.jpg
Try loading the link in Chrome/Firefox (in IE/Safari the image is only displayed once it has fully loaded).
You can observe:
the image loads first in black & white
then what looks like the Red channel loads
next the Green channel loads
last the Blue channel loads
I tried loading it again over an emulated very slow connection, and observed that the JPEG not only loads channel by channel, but in a progressive way as well. So the first image loaded is a black-and-white mosaic, then a green-ish mosaic, then gradually a full-color mosaic, and finally the full-resolution, full-color image.
This is amazing technology. Suppose you are building an e-magazine where each page has a lot of pictures and you want the user to flip quickly through the pages; this kind of image is exactly what works best. For a fast preview, load the black-and-white thumbnail; if the user stays, fully load the original image.
So my question is: How could I generate such image using Python Pillow or ImageMagick, or any kind of open source software?
If you think this question is inappropriate please comment, don't just close it.
Update 1:
It turns out Google uses this technology in all of its JPEG pictures 1, 2, e.g. this
Update 2: I found another clue
The image data in a JPEG file can be sliced up in many different ways, and the slices (or "scans" as they're usually called) can be stored in the file in many different orders.
In most JPEG files, the first scan in the file contains all of the image's color components, interleaved together if it is a color image. In a non-progressive JPEG, the file will contain just that one scan. In a progressive JPEG, other scans will follow, each of which may contain one component or multiple components.
But there's nothing that requires it to be done that way. If the first scan in the file does not contain all the color components, we might call such a file "non-interleaved".
Your examples files are non-interleaved, and they are also progressive. Progressive non-interleaved JPEGs seem to be more widely supported than non-progressive non-interleaved JPEGs.
The standard IJG libjpeg software is capable of creating non-interleaved files. It's not exactly easy, but you can use its cjpeg utility with the -scans option, documented in the wizard.txt file. A rough sketch follows below.
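To make that concrete, here is a rough, untested sketch that writes a minimal non-interleaved progressive scan script and feeds it to cjpeg. The exact scan ordering is an assumption; the syntax follows wizard.txt, where each line lists the component index(es), then Ss Se Ah Al.

```python
import subprocess

# Minimal non-interleaved, progressive scan script: each component is coded
# in its own scans (luma first, then Cb, then Cr), so the image appears in
# grayscale first and gains colour channel by channel.
# Syntax per libjpeg's wizard.txt: "components: Ss Se Ah Al;"
scan_script = """\
0: 0 0 0 0;
0: 1 63 0 0;
1: 0 0 0 0;
1: 1 63 0 0;
2: 0 0 0 0;
2: 1 63 0 0;
"""

with open("scans.txt", "w") as f:
    f.write(scan_script)

# cjpeg takes PPM/BMP/Targa input; file names here are placeholders.
subprocess.run(
    ["cjpeg", "-scans", "scans.txt", "-outfile", "out.jpg", "in.ppm"],
    check=True,
)
```

As noted above, decoders that don't render non-interleaved files incrementally (your IE/Safari observation) will simply show the image once it has fully downloaded.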
I am creating a kind of 'map' in my app. This is basically just viewing an image with an imageView/scrollView. However, the image is huge - like 20,000x15,000 px or something. How can I tile this image so that it fits? When the app tiles it by itself, it uses way too much memory, and I want this to be done before the app is launched, so I can include just the tiles, not the original image. Can Photoshop do this?
I have not done a complete search for this yet, as I am away and typing on an iPhone with a limited network connection.
Apple has a project called PhotoScroller. It supports panning and zooming of large images. However, it does this by pre-tiling the images - if you look in the project you will see hundreds of tiles for various zoom sizes. The project however does NOT come with any kind of tiling utility.
So what some people have done is create algorithms or code that anyone can use to create these tiles. I support an open source project, PhotoScrollerNetwork, that allows people to download huge JPEGs from the network, tile them, then display them as PhotoScroller does; while doing research for it I found several people who had posted tiling software.
I googled "PhotoScroller tiling utility" and got lots of hits, including one here on SO
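For reference, the core of such a tiling utility is quite small. Below is a rough Pillow sketch that cuts a huge image into 256x256 tiles at a few power-of-two zoom levels, in the spirit of what PhotoScroller expects; the tile size, number of levels and naming scheme are assumptions, not PhotoScroller's exact format.

```python
from PIL import Image
import os

Image.MAX_IMAGE_PIXELS = None     # disable the decompression-bomb guard for huge images
TILE = 256                        # assumed tile size
LEVELS = 4                        # zoom levels: full size, 1/2, 1/4, 1/8

src = Image.open("huge_map.png")  # placeholder file name

for level in range(LEVELS):
    scale = 2 ** level
    img = src if level == 0 else src.resize((src.width // scale, src.height // scale))
    out_dir = f"tiles/level_{level}"
    os.makedirs(out_dir, exist_ok=True)
    for top in range(0, img.height, TILE):
        for left in range(0, img.width, TILE):
            box = (left, top, min(left + TILE, img.width), min(top + TILE, img.height))
            img.crop(box).save(f"{out_dir}/{left // TILE}_{top // TILE}.png")
```

Note that Pillow decodes the whole image into memory (roughly 1 GB for a 20,000x15,000 RGBA image), which is fine on a desktop build machine - the point is that the tiling happens offline, and the app only ships or downloads the tiles.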
CATiledLayer is one way to do it, and it is of course the best option if you can pre-tile the images, either downloading them from the internet (pay attention to how many connections you are going to open) or embedding them (increasing the overall app size). The other option is to memory-map the image on the file system (but an image at that resolution could take about 1 GB). Take a look at this question, it could be an interesting topic: SO question about low memory scenario