How to set random data in a brick-masonry layout in a React Native iOS application

I am working on a React Native application where I have used the "react-native-masonry-brick-list" library for a brick view. The data lays out correctly in the brick view only if the article sizes are set in a predefined pattern. I use four view ratios: 100%, 75%, 50% and 25%.
If the user passes the 1st article as 25%, the 2nd as 75%, the 3rd as 100%, and so on, the list renders properly.
Whereas if the user sets the sizes randomly, some empty space is left over.
How can I arrange the data so that empty space remains only in the last block?
[Screenshot: bricks with predefined sizes]
[Screenshot: randomly set sizes leave empty space between bricks]
[Screenshot: randomly arranged data]

As you have used the react-native-masonry-brick-list library, which has no option for arranging random data in a brick format, you can opt for one of two approaches:
fetch the data in the app and then sort it, or
apply the sorting on your backend.
Either way this is a workaround rather than a fix; a sketch of such a sorting pass follows.
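For concreteness, here is a minimal sketch of the sorting pass (written in Swift like the other examples on this page; the logic ports directly to JavaScript, and the Article type with its quarters field is a hypothetical model). Each size is expressed in quarters of a row (25% = 1, 50% = 2, 75% = 3, 100% = 4), and items are packed first-fit-decreasing so that rows sum to exactly 4:

```swift
struct Article { let id: Int; let quarters: Int }   // hypothetical model

func packIntoRows(_ items: [Article]) -> [Article] {
    var rows: [[Article]] = []   // items grouped per 100% row
    var free: [Int] = []         // remaining quarters in each row
    // First-fit-decreasing: biggest bricks first, each into the first row
    // that still has room, otherwise a fresh row.
    for item in items.sorted(by: { $0.quarters > $1.quarters }) {
        if let i = free.firstIndex(where: { $0 >= item.quarters }) {
            rows[i].append(item)
            free[i] -= item.quarters
        } else {
            rows.append([item])
            free.append(4 - item.quarters)
        }
    }
    return rows.flatMap { $0 }   // flattened order to feed the list
}
```

Note that "space only in the last block" is only achievable when the multiset of sizes happens to tile full rows; when it doesn't (say, two 75% articles), some gap is unavoidable, but this greedy pass greatly reduces the leftover space.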

Related

What software is recommended to automate image annotation?

We make images like the following in Excel. The raw image is imported and positioned in roughly the correct area within the annotations, which are themselves images linked to ranges whose contents differ depending on selections made by the user.
The absolute position and dimensions of each annotation must be adjusted manually for every image. The number of sample names can vary (up to 12 lanes of samples). The size ladder on the left can also vary depending on the size of the protein being analyzed.
After everything is correctly sized and aligned, the range containing the raw image + annotations is copied and saved as a jpg (which is then imported into an Access database).
Though we've automated some parts of this with VBA, the process of tweaking every image (widths of columns, text size, position of size ladder, etc.) can get very tedious. Surely there is some software out there that will make this process more efficient. It takes one of our staff members hours to adjust and finalize about 10-20 of these images.
Any recommendations are welcomed.
This procedure is called electrophoresis. Samples (in this case proteins) are loaded into a polyacrylamide gel (each sample in its own "lane") and pulled through the gel with electricity. This process separates all of the proteins in each lane by size and charge.
The "ladder" is a solution of various proteins of known size. It's used to determine the sizes of the proteins in the other lanes.
The image on the left contains the range of sizes in the ladder (in this case 10, 15,...150, 200). Each "step" in the ladder image is aligned with the black bands that appear in the ladder lane in the experiment (the actual ladder lane that contains the black bands is not present in this case...it's cropped post-alignment to improve the overall look of the image).
The images on the right are protein names and point to the location on the gel where that particular protein should appear. The protein Actin, for example, is supposed to come out at around 42 kilodaltons. The fact that there is a prominent black band in that location is good supporting evidence that this sample contains Actin protein.
Many gels will also describe the sample source at the top or the bottom. So, for example, if the sample in lane 1 was derived from mouse liver cells, lane 1 would be annotated as "mouse liver."
The raw image is captured in the lab and is saved as a jpg. This jpg is then manually copied directly into an Excel sheet, where extraneous parts of the image are cropped. The cropped image is then moved to within the area of the worksheet that contains the annotations (ladder, protein names, sample names). These annotations are themselves images (linked to other ranges in the workbook that change with every experiment...protein names, samples names, ladder type can be different for every experiment). These annotation images require fine positioning in each case (as described previously) to align with the lanes and with the protein sizes. Once everything is aligned, it is saved as a jpg and moved into Access.
My question is...Is there software already out there designed specifically for tasks like these? Just as Excel is not a database program, it is also not an image annotation program. I want to know if there is an application out there, ready to go, that is specifically designed to annotate images with elements that can vary from image to image.
Of course, there will still be a need for manually moving elements around the image to get everything aligned (I'm not looking for a miracle here). I'm thinking that there has to be something better than Excel for this.

Can I save AR data for reuse?

My goal is to place an object on an ARCore plane in a room and then save the plane's and the object's data to a file. After the app exits and starts again, the saved object can be loaded from the file and displayed at the same position as last time.
To persist virtual objects, we could probably use VPS (Visual Positioning Service, not yet released) to localize the device within a room.
However, there's no API to achieve this in the developer preview version of ARCore.
You can save anchor positions in ARCore using Augmented Images.
All you have to do is place your objects wherever you want, go back to one or more Augmented Images, and save the positions of the corners of your Augmented Images to a text or binary file on your device.
Then, in the next session, say you used one Augmented Image and 4 points (the corners of the image): you load those positions and calculate a transformation matrix between the two sessions using the two groups of 4 points that are common to each session. You need this because ARCore's coordinate system changes with every session, depending on the device's initial position and rotation.
Finally, you can calculate the positions and rotations of your anchors in the new session using this transformation matrix. They will be placed at the same physical location, with an error margin caused by the accuracy of Augmented Image tracking; if you use more points, this error margin becomes relatively smaller.
I have tested this with 4 points in each group, and it is quite accurate considering my anchors were placed at arbitrary locations, not attached to any Trackable.
In order to calculate the Transformation Matrix you can refer to this
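For reference, one way to compute that transformation matrix is to build an orthonormal frame from three of the four corners in each session and compose the new frame with the inverse of the old one. A minimal sketch using simd (shown in Swift for brevity; the math is identical in any matrix library, and the corner arrays are assumed to be saved in the same order in both sessions):

```swift
import simd

// Build a rigid frame (rotation + translation) from three non-collinear
// corners of the augmented image: p0 is the origin, p1 lies along one edge,
// p2 is any other corner off that edge.
func frame(origin p0: SIMD3<Float>, _ p1: SIMD3<Float>, _ p2: SIMD3<Float>) -> simd_float4x4 {
    let x = simd_normalize(p1 - p0)                 // edge of the image
    let z = simd_normalize(simd_cross(x, p2 - p0))  // normal of the image plane
    let y = simd_cross(z, x)                        // completes the basis
    return simd_float4x4(columns: (
        SIMD4<Float>(x.x, x.y, x.z, 0),
        SIMD4<Float>(y.x, y.y, y.z, 0),
        SIMD4<Float>(z.x, z.y, z.z, 0),
        SIMD4<Float>(p0.x, p0.y, p0.z, 1)))
}

// Maps old-session coordinates into the current session: re-express a point
// relative to the old frame, then apply the frame observed now.
func sessionTransform(oldCorners: [SIMD3<Float>],
                      newCorners: [SIMD3<Float>]) -> simd_float4x4 {
    let oldFrame = frame(origin: oldCorners[0], oldCorners[1], oldCorners[2])
    let newFrame = frame(origin: newCorners[0], newCorners[1], newCorners[2])
    return newFrame * oldFrame.inverse
}

// Usage: newAnchorPose = sessionTransform(oldCorners: saved,
//                                         newCorners: seen) * oldAnchorPose
```

Averaging over all four corners, or a least-squares fit such as the Kabsch algorithm, makes use of the redundant point and reduces the error margin mentioned above.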

Most efficient way to draw a huge number of tiny dots in iOS?

I am entirely new to iOS, and for my first project I am trying to make a chaos game app, which naturally requires a large number of points to be drawn. Ideally, I need 10^5 to 10^6 individual pixel points for the illustrations.
This is a performance concern. Right now I am drawing each dot as a CGRect with a very small height and width; is there a less expensive way to do this?
I don't need to draw all of the dots at once, but I do need them to appear on screen over a reasonable period of time to show the progression of the emerging fractal.
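In case it helps, a common first step up from per-dot drawing is to batch the dots: fill thousands of one-pixel rects with a single Core Graphics call into an offscreen image and display that image. A minimal sketch (function name, batch size, and the black fill are arbitrary choices):

```swift
import UIKit

// Draw a batch of points on top of the previously rendered image, so the
// fractal builds up progressively across calls.
func render(points: [CGPoint], over previous: UIImage?, size: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { ctx in
        previous?.draw(at: .zero)                       // keep earlier dots
        ctx.cgContext.setFillColor(UIColor.black.cgColor)
        // One fill call for the whole batch is far cheaper than one call
        // per dot.
        ctx.cgContext.fill(points.map {
            CGRect(origin: $0, size: CGSize(width: 1, height: 1))
        })
    }
}
```

Feeding this a few thousand new points per frame (e.g. from a CADisplayLink) and showing the result in a UIImageView keeps the UI responsive; for the full 10^6 range, writing pixels directly into a bitmap buffer or rendering with Metal is the next step up.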

Scrolling a line graph horizontally in iOS Charts

I am using iOS Charts to develop an application that represents a set of data on a line graph. I am trying to represent large data sets on the graph, but due to the small width of the iPhone screen the graph gets compressed and looks ugly, and most of the labels disappear. Is there a way to create a large line graph that supports all the data without compressing it, and have the user scroll horizontally to view the remaining portions of the graph?
I was thinking that a horizontal scroll viewer could do the job; however, Swift doesn't seem to have such an object, only a vertical scroll viewer. Any ideas?
Thank you
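For what it's worth, UIScrollView does scroll in both axes, but with the Charts library you typically don't need one at all: the chart view pans its own viewport once you cap the visible x-range. A minimal sketch (the 20-unit window is an arbitrary choice; call this after assigning chartView.data):

```swift
import Charts   // danielgindi/Charts, the usual "iOS Charts" library

func enableHorizontalScrolling(on chartView: LineChartView) {
    chartView.dragEnabled = true            // pan with a finger
    chartView.scaleYEnabled = false         // keep the vertical zoom fixed
    chartView.setVisibleXRangeMaximum(20)   // show at most 20 x-units at once
    chartView.moveViewToX(0)                // start at the left edge
}
```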

Core Plot iPad performance issue

In my app I have a Core Plot bar chart on a scroll view with paging. On iPhone everything works fine: you page between different pages, one of which is the plot with its own touch gestures and properties.
The problem starts when I run the same code on iPad: the plot becomes slow and laggy, all touch gestures take a long time to respond, and the whole scroll-view paging becomes heavy and slow.
The chart itself contains 100 points or so (not that big).
I've read somewhere that the difference in plot size between iPhone and iPad causes this drop in performance because the iPad renders 4 times the graphics. Has anybody had this problem before? Is there something I can do to improve performance on iPad without limiting or losing plot data?
Unfortunately, Core Plot is a very slow library that can only handle a few hundred data points (or fewer in some cases).
I wrote an answer here that describes a performance comparison of iOS chart components. One of the charts tested was Core Plot, and it couldn't complete the first test!
Without knowing the specifics of your app, here are some general performance hints (a sketch of them in code follows the list):
Set all line styles and fills that you don't need to nil rather than transparent colors.
Use solid color fills rather than gradients or images where possible.
Reduce the number of axis labels, tick marks, and grid lines if possible. Perhaps eliminate minor tick marks and grid lines completely (set the corresponding line styles to nil).
Only call -reloadData when a significant portion of the plot data changes. Use the insert and delete methods when possible. See the "Real Time Plot" in the Plot Gallery example app.
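For concreteness, here is a minimal Swift sketch of the first three hints, assuming a CPTXYGraph with a bar plot already configured (property names follow Core Plot's Objective-C API as bridged to Swift; treat the exact spellings as assumptions):

```swift
import CorePlot

func slimDown(graph: CPTXYGraph, barPlot: CPTBarPlot) {
    barPlot.lineStyle = nil                         // hint 1: drop unneeded outlines
    barPlot.fill = CPTFill(color: CPTColor.blue())  // hint 2: flat color, no gradient
    if let axisSet = graph.axisSet as? CPTXYAxisSet {
        for axis in [axisSet.xAxis, axisSet.yAxis] {
            axis?.minorTickLineStyle = nil          // hint 3: no minor ticks
            axis?.minorGridLineStyle = nil          // and no minor grid lines
        }
    }
}
```

For the last hint, when only a handful of new points arrive, appending them with -insertDataAtIndex:numberOfRecords: instead of calling -reloadData avoids rebuilding the whole plot.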
