Adding constraints to reusable UITableViewCell in Swift

I'm working on an app which displays results for different sports. The ranking table's columns should size dynamically to fit their content. Currently everything has a fixed width, so once one of the columns contains a lot of text, it gets cropped at the end:
+--------+-------+-------+----+
|        | Goals | Diff  | Sc |
+--------+-------+-------+----+
| Team 1 | 152   | 15... | 53 |  // Should be 152:xxx
+--------+-------+-------+----+
| Team 2 | 146   | 14... | 53 |  // Should be 145:xxx
+--------+-------+-------+----+
| Team 3 | 41    | 41... | 53 |  // Should be 41:xxx
+--------+-------+-------+----+
Instead, it should determine the maximum content width for each individual column and size that column accordingly. In case the columns become too wide, they should expand to the left, cropping the team's name instead (in fact, that cell will word wrap if it gets too narrow).
+--------+-----+---------+----+
|        | Goa | Diff    | Sc |
+--------+-----+---------+----+
| Team 1 | 152 | 152:xxx | 53 |
+--------+-----+---------+----+
| Team 2 | 146 | 145:xxx | 53 |
+--------+-----+---------+----+
| Team 3 | 41  | 41:xxx  | 53 |
+--------+-----+---------+----+
I have yet to decide what exactly to do with the column headers, but that's another problem.
I have already tried computing the cell widths, comparing them to one another and then building an array of the maximum width per column, which works as it should. However, it requires quite a lot of computation, as it manually creates a label for each column in every row, calls sizeToFit() on it and then compares its width against the maximum width so far. I can't believe this is the only way to do it.
Then I use this array of widths in each call of tableView(_:cellForRowAtIndexPath:) to add constraints to the dequeued reusable table view cells (one constraint per label, for however many labels there are). Again, having to create three constraints in every call of the method doesn't seem optimal to me.
Is there an easier way? Is there a built-in function I'm missing (or perhaps a library for exactly this)?


Setting spacing per breakpoint in Bootstrap 5

I thought it was possible to set spacing (margin/padding) per breakpoint in Bootstrap 5?
Something like mb-sm-2 doesn't seem to work... Should it...?
I have checked the docs but don't really understand what they're saying - it sounds like breakpoints are supported... 🤷‍♂️
Yes, breakpoints are supported in Bootstrap 5.
From the docs: if you want to set margin or padding for extra-small dimensions (<576px), you use
<div class="mb-2"></div>
That, though, will affect all breakpoints, as it is not scoped to any particular media query.
If you want to specify margin on small dimensions (≥576px) and up, you use
<div class="mb-sm-2"></div>
That will affect dimensions equal to and greater than 576px. In case you want it to take effect only on small dimensions, you use
<div class="mb-sm-2 mb-md-0"></div>
Dimensions equal to and greater than 768px, as well as those smaller than 576px, will not have the specified margin.
Bootstrap 5 uses a different syntax for breakpoints compared with previous versions.
The available breakpoints are as follows:
| Breakpoint        | Class infix | Dimensions |
|-------------------|-------------|------------|
| X-Small           | None        | <576px     |
| Small             | sm          | ≥576px     |
| Medium            | md          | ≥768px     |
| Large             | lg          | ≥992px     |
| Extra large       | xl          | ≥1200px    |
| Extra extra large | xxl         | ≥1400px    |
Here is a concrete example: in order to get 0 margin and 0 padding ONLY for X-Small screens (<576px), use something like this:
class="m-0 p-0 m-sm-4 p-sm-4"
where m-0 and p-0 will be applied only for widths <576px, and m-sm-4 and p-sm-4 for widths ≥576px.

Can logistic regression be used for variables containing lists?

I'm pretty new to Machine Learning and I was wondering whether certain algorithms/models (e.g. logistic regression) can handle lists as values for their variables. Until now I've always used pretty standard datasets, where you have a couple of variables, associated values, and then a classification for each set of values (see example 1). However, I now have a similar dataset but with lists for some of the variables (see example 2). Is this something logistic regression models can handle, or would I have to do some kind of feature extraction to transform this dataset into just a normal dataset like example 1?
Example 1 (normal):
+---+------+------+------+----------------+
|   | var1 | var2 | var3 | classification |
+---+------+------+------+----------------+
| 1 | 5    | 2    | 526  | 0              |
| 2 | 6    | 1    | 686  | 0              |
| 3 | 1    | 9    | 121  | 1              |
| 4 | 3    | 11   | 99   | 0              |
+---+------+------+------+----------------+
Example 2 (lists):
+-----+-------+--------+---------------------+-----------------+-------+
|     | width | height | hlines              | vlines          | class |
+-----+-------+--------+---------------------+-----------------+-------+
| 1   | 115   | 280    | [125, 263, 699]     | [125, 263, 699] | 1     |
| 2   | 563   | 390    | [11, 211]           | [156, 253, 399] | 0     |
| 3   | 523   | 489    | [125, 255, 698]     | [356]           | 1     |
| 4   | 289   | 365    | [127, 698, 11, 136] | [458, 698]      | 0     |
| ... | ...   | ...    | ...                 | ...             | ...   |
+-----+-------+--------+---------------------+-----------------+-------+
To provide some additional context on my specific problem: I'm attempting to represent drawings. Drawings have a width and a height (regular variables), but drawings also have, for example, a set of horizontal and vertical lines (represented as a list of their coordinates on their respective axis). This is what you see in example 2. The actual dataset I'm using is even bigger, also containing variables which hold lists of the thicknesses of each line, lists of the extension of each line, lists of the colors of the spaces between the lines, etc. In the end I would like my logistic regression to pick up on what results in nice drawings. For example, if there are too many lines too close together, the drawing is not nice. The model should pick up on these 'characteristics' of what makes a drawing nice or bad by itself.
I didn't include these because the way this data is set up is a bit confusing to explain, and if I can solve my question for the above dataset I feel like I can apply the principle of that solution to the rest of the dataset as well. However, if you need additional (full) details, feel free to ask!
Thanks in advance!
No, it cannot directly handle that kind of input structure. The input must be a homogeneous 2D array. What you can do is come up with new features that capture some of the relevant information contained in the lists. For instance, for the lists that contain the coordinates of the lines along an axis, useful features (other than the actual values themselves) could be the spacing between lines, the total number of lines, or some statistics such as the mean location.
So the way to deal with this is feature engineering. This is, in fact, something that has to be done in most cases: in many ML problems you not only have variables that describe a single aspect or feature of each data sample, but also variables that are aggregates of other features or of groups of samples, which may be the only way to go if you want to incorporate certain data sources.
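As an illustration only, here is a minimal sketch in Python of that kind of feature engineering, using the toy data from example 2. The helper line_features and the particular statistics it computes (count, mean position, gaps between lines) are just illustrative choices, not the one right feature set:
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy data taken from example 2
df = pd.DataFrame({
    "width":  [115, 563, 523, 289],
    "height": [280, 390, 489, 365],
    "hlines": [[125, 263, 699], [11, 211], [125, 255, 698], [127, 698, 11, 136]],
    "vlines": [[125, 263, 699], [156, 253, 399], [356], [458, 698]],
    "class":  [1, 0, 1, 0],
})

def line_features(coords, prefix):
    # Collapse a variable-length list of line coordinates into a few fixed-size columns
    coords = np.sort(np.asarray(coords, dtype=float))
    gaps = np.diff(coords)
    return {
        prefix + "_count":    len(coords),
        prefix + "_mean":     coords.mean() if len(coords) else 0.0,
        prefix + "_min_gap":  gaps.min() if len(gaps) else 0.0,
        prefix + "_mean_gap": gaps.mean() if len(gaps) else 0.0,
    }

hf = pd.DataFrame([line_features(c, "h") for c in df["hlines"]])
vf = pd.DataFrame([line_features(c, "v") for c in df["vlines"]])
X = pd.concat([df[["width", "height"]], hf, vf], axis=1)  # homogeneous 2D input
y = df["class"]

model = LogisticRegression(max_iter=1000).fit(X, y)
A feature like h_min_gap is exactly the kind of thing that could let the model notice that drawings with many lines very close together tend not to be nice.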
Wow, great question. I had never considered this, but having seen other people's responses, I have to concur, 100%: convert the lists into a data frame and run your code on that object.
import pandas as pd

# The first row holds the column names, the remaining rows hold the values
data = [["col1", "col2", "col3"], [0, 1, 2], [3, 4, 5]]
column_names = data.pop(0)
df = pd.DataFrame(data, columns=column_names)
print(df)
Result:
   col1  col2  col3
0     0     1     2
1     3     4     5
You can easily run any kind of multiple regression on the fields/features of the data frame and you'll get what you need. See the link below for some ideas on how to get started.
https://pythonfordatascience.org/logistic-regression-python/
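For instance, a minimal sketch of fitting a logistic regression on such a data frame with scikit-learn; the label column y below is made up purely for illustration:
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Same toy data frame as above
data = [["col1", "col2", "col3"], [0, 1, 2], [3, 4, 5]]
column_names = data.pop(0)
df = pd.DataFrame(data, columns=column_names)

y = [0, 1]  # made-up labels, one per row, purely for illustration
model = LogisticRegression().fit(df, y)
print(model.predict(df))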
Post back if you have additional questions related to this. Or, start a new post if you have similar, but unrelated, questions.

In a data warehouse, can a fact table contain two identical records?

Suppose a user ordered the same product under two different order_ids;
the orders are created within the same date-hour grain, for example:
order#1 2019-05-05 17:23:21
order#2 2019-05-05 17:33:21
In the data warehouse, should we put them into two rows like this (Option 1):
| id  | user_key | product_key | date_key | time_key | price | quantity |
|-----|----------|-------------|----------|----------|-------|----------|
| 001 | 1111     | 22          | 123      | 456      | 10    | 1        |
| 002 | 1111     | 22          | 123      | 456      | 10    | 2        |
Or just put them in one row with the aggregated quantity (Option 2):
| id  | user_key | product_key | date_key | time_key | price | quantity |
|-----|----------|-------------|----------|----------|-------|----------|
| 001 | 1111     | 22          | 123      | 456      | 10    | 3        |
I know that if I keep the order_id in the fact table as a degenerate dimension, it should be Option 1. But in our case, we don't really want to keep the order_id.
Also, I once read an article saying that when a fact table is filtered by all of its dimensions, only one row should remain. If that statement is correct, Option 2 would be the choice.
Is there a principle I can refer to?
Conceptually, fact tables in a data warehouse should be designed at the most detailed grain available. You can always aggregate data from a lower granularity to a higher one, while the opposite is not true: if you combine the records, some information is lost permanently. If you ever need it later (even though you might not see the need now), you'll regret the decision.
I would recommend the following approach: in the data warehouse, keep the order number as a degenerate dimension. Then, when you publish a star schema, you might build a pre-aggregated version of the table (skip the order number, group identical records by date/hour). This way, you get a smaller/cleaner fact table in your dimensional model, and yet preserve the more detailed data in the DW.
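To illustrate the roll-up direction with the numbers from the question, here is a small sketch; pandas is used purely for illustration, and in a real warehouse this would typically be a SQL GROUP BY or an aggregate table built by the ETL:
import pandas as pd

# Detailed grain (Option 1), with the order number kept as a degenerate dimension
detail = pd.DataFrame({
    "order_id":    ["order#1", "order#2"],
    "user_key":    [1111, 1111],
    "product_key": [22, 22],
    "date_key":    [123, 123],
    "time_key":    [456, 456],
    "price":       [10, 10],
    "quantity":    [1, 2],
})

# Aggregated grain (Option 2): drop the order number and group identical dimension keys
option2 = (detail
           .groupby(["user_key", "product_key", "date_key", "time_key", "price"], as_index=False)
           .agg(quantity=("quantity", "sum")))
print(option2)  # one row, quantity == 3
Going the other way, from the aggregated row back to the individual orders, is impossible, which is exactly why the detailed grain is worth keeping.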

Google Sheets: if a string matches and is the last matching string in a set of cells, get the value of the cell next to it

Below is an example of the data in the table:
+--------------+------+---------+
| Expense Name | Cost | mileage |
+--------------+------+---------+
| Costco Gas   | 20   | 145200  |
| marathon gas | 2    | 145500  |
| oil change   | 35   | 145600  |
| marathon gas | 25   | 145750  |
| A/C Work     | 305  | 145800  |
| oil change   | 36   | 150000  |
+--------------+------+---------+
Whenever the "Expanse Name" string equals "oil change" and it has the highest Mileage from the corresponding mileage I want that mileage to appear in a separate column.
So with this data I would search through the "Expense Name" column and find two that matched the string. From those two I want the one with the higher mileage(150000) to appear.
Another method that doesn't require dragging or array formulae is
=MAX(FILTER(C2:C, A2:A = "oil change"))
Let us say "Expense Name" is in A1.
In D2, put the formula =COUNTIF(A2, "oil change")*C2.
Grab the handle in the lower right-hand corner of the cell and drag it down to copy the formula throughout your data set (in your case, down to D7).
One cell below that (D8 in your example), take the MAX of the cells above, so in your case =MAX(D2:D7).
That cell contains your answer.

Highlighting duplicates over multiple rows using conditional formatting

I'm trying to create a Google spreadsheet to organise a seating plan. I've laid out the page with 11 separate smaller tables in a kind of grid format (for easier reading, since you can see it all without scrolling), like this:
Table 1            | Table 2            | Table 3            |
Seat | Name | Diet | Seat | Name | Diet | Seat | Name | Diet |
1    |      |      | 1    |      |      | 1    |      |      |
2    |      |      | 2    |      |      | 2    |      |      |
I'm trying to create a conditional format to highlight cells where a name appears more than once.
So I created a conditional format with a custom formula as follows (the range covers all the tables):
COUNTIF($A$1:$O$42, A1)>1;
and I set the range to only the Name cells on the page.
However, when I purposely create a duplicate, the cells are not highlighted.
I thought that maybe my formula was wrong, so I put it into a cell, pointed it at a duplicate cell, and it returned TRUE.
So I'm at a loss as to how to get it working.
Try this formula applied to range A3:O
=AND(ISTEXT(A3),COUNTIF($A$3:$O$42, A3)>1)
Example sheet is here: https://goo.gl/hChZbt
