I often need to add text to an image. To do this, I use the Text Tool. When I click on the image, I can start filling in the text. However, the box the text is in always shows up as transparent. There are times when this is good, but often I want black text on a white box. How can I set the color of the box the text is inside?
You can just bucket-fill the layer (no selection) after putting the Bucket Fill tool in Behind mode. But this makes the text layer no longer a text layer (the text and font information are lost).
So a better solution is to add a layer under the text layer and bucket-fill it, with two options:
Make a layer the right size (when you create the layer, it takes the size of the active layer by default) and bucket-fill the whole layer
Make an image-size layer, make a rectangle selection and bucket-fill the selection
Note that bucket-filling the text layer (or a layer of the exact same size) usually won't look good, because the boundaries of the layer come from the font geometry (so that you can stack/abut layers) and there is a lot more space at the top and bottom than on the sides.
The attention weights are computed as (the softmax over alignment scores used by Luong attention):

alpha_ts = exp(score(h_t, h_s)) / sum over s' of exp(score(h_t, h_s'))

I want to know what h_s refers to.
In the tensorflow code, the encoder RNN returns a tuple:
encoder_outputs, encoder_state = tf.nn.dynamic_rnn(...)
I would think that h_s should be the encoder_state, but the github/nmt tutorial gives a different answer:
# attention_states: [batch_size, max_time, num_units]
attention_states = tf.transpose(encoder_outputs, [1, 0, 2])
# Create an attention mechanism
attention_mechanism = tf.contrib.seq2seq.LuongAttention(
    num_units, attention_states,
    memory_sequence_length=source_sequence_length)
Did I misunderstand the code, or does h_s actually mean the encoder_outputs?
The formula is probably from this post, so I'll use an NN picture from the same post:
Here, the h-bar(s) are all the blue hidden states from the encoder (the last layer), and h(t) is the current red hidden state from the decoder (also the last layer). In the picture, t = 0, and you can see which blocks are wired to the attention weights with dotted arrows. The score function is usually one of these (in Luong's terminology): h_t' · h-bar_s (dot), h_t' · W_a · h-bar_s (general), or v_a' · tanh(W_a [h_t; h-bar_s]) (concat).
The Tensorflow attention mechanism matches this picture. In theory, the cell output is in most cases its hidden state (one exception is the LSTM cell, where the output is the short-term part of the state, and even in this case the output is better suited for the attention mechanism). In practice, tensorflow's encoder_state is different from encoder_outputs when the input is padded with zeros: the state is propagated from the previous cell state while the output is zero. Obviously, you don't want to attend to trailing zeros, so it makes sense to take the h-bar(s) from the outputs, which are zero for these cells.
So encoder_outputs are exactly the arrows that go upward from the blue blocks. Later in the code, attention_mechanism is connected to the decoder_cell, so that its output goes through the context vector to the yellow block in the picture:
decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
    decoder_cell, attention_mechanism,
    attention_layer_size=num_units)
Given the following view hierarchy:
root (e.g. view of a view controller)
|_ superview: A view where we will draw a cross using Core Graphics
   |_ container: Clips subview
      |_ subview: A view where we will show a cross by adding subviews, which has to align perfectly with the cross drawn in superview
         |_ horizontal line of cross
         |_ vertical line of cross
Task:
The crosses of superview and subview have to be always aligned, given a global transform. More details in "requirements" section.
Context:
The view hierarchy above belongs to a chart. In order to provide maximal flexibility, it allows presenting chart points & related content in 3 different ways:
Drawing in the chart's base view (superview) draw method.
Adding subviews to subview. subview is transformed on zoom/pan, and with it automatically its subviews.
Adding subviews to a sibling of subview. This is not shown in the view hierarchy, for simplicity and because it's not related to the problem; I only mention it here to give an overview. The difference between this method and 2. is that here the view is not transformed, so it's left to the implementation of the content to manually update the transforms of all the children.
Maximal flexibility! But with this comes the cost that it's a bit tricky to implement. Specifically point 2.
Currently I have zoom/pan working by processing the transforms for superview's Core Graphics drawing and for subview separately, but this leads to redundancy and error-proneness, e.g. repeated code for boundary checks, etc.
So now I'm trying to refactor it to use one global matrix to store all the transforms and derive everything from it. Applying the global matrix to the coordinates used by superview to draw is trivial, but deriving the matrix of subview, given requirements listed in next section, not so much.
I mention "crosses" in the view hierarchy section because this is what I'm using in my playgrounds as a simplified representation of one chart point (with x/y guidelines) (you can scroll down for images and gists).
Requirements:
The content can be zoomed and panned.
The crosses stay always perfectly aligned.
subview's subviews, i.e. the cross line views can't be touched (e.g. to apply transforms to them) - all that can be modified is subview's transform.
The zooming and panning transforms are stored only in a global matrix, called matrix.
matrix is then used to calculate the coordinates of the cross drawn in superview (trivial), as well as the transform matrix of subview (not trivial - reason of this question).
Since it doesn't seem to be possible to derive the matrix of subview uniquely from the global matrix, it's allowed to store additional data in variables, which are then used together with the global matrix to calculate subview's matrix.
The size/origin of container can change during zoom/pan. The reason for this is that the labels of the y-axis can have different lengths, and the chart is required to adapt the content size dynamically to the space occupied by the labels (during zooming and panning).
Of course, when the size of container changes, the domain-to-screen coordinate ratio has to change accordingly, such that the complete original visible domain continues to be contained in container. E.g. if I'm displaying an x-axis with a domain [0, 10] in a container frame 500pt wide, the ratio to convert a domain point to screen coordinates is 500/10 = 50; if I shrink the container width to 250, my [0, 10] domain has to fit in this new width, so the ratio becomes 250/10 = 25.
It also has to work for multiple crosses (at the same time) and arbitrary domain locations for each. This should happen automatically by solving 1-7, but I mention it for completeness.
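The ratio arithmetic described above (500/10 = 50 shrinking to 250/10 = 25) can be sketched in a few lines of Swift. The DomainConverter type and its names are illustrative, not from the project:

```swift
// Maps a domain x-coordinate to screen points inside the container.
// Because the ratio is derived from the current container width,
// resizing the container automatically rescales the visible domain.
struct DomainConverter {
    var domain: ClosedRange<Double>   // visible domain, e.g. 0...10
    var containerWidth: Double        // container width in points

    var ratio: Double {
        containerWidth / (domain.upperBound - domain.lowerBound)
    }

    func screenX(forDomainX x: Double) -> Double {
        (x - domain.lowerBound) * ratio
    }
}

var converter = DomainConverter(domain: 0...10, containerWidth: 500)
print(converter.ratio)            // 50.0
converter.containerWidth = 250    // container shrinks during zoom/pan
print(converter.ratio)            // 25.0
```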
What I have done:
Here are step-by-step playgrounds I wrote to try to understand the problem better:
Step 1 (works):
Build the hierarchy as described at the beginning, displaying nothing but the crosses that have to stay aligned during (programmatic) zoom & pan. Meets requirements 1, 2, 3, 4 and 5:
Gist with playground.
Particularities here:
I skipped container view, to keep it simple. subview is a direct subview of superview.
subview has the same size as superview (before zooming of course), also to keep it simple.
I set the anchor point of subview to the origin (0, 0), which seems to be necessary to stay in sync with the global matrix.
The translation used for the anchor change has to be remembered, in order to apply it again together with the global matrix; otherwise it gets overwritten. For this I use the variable subviewAnchorTranslation. This is part of the additional data I had in mind in the bullet under requirement 5.
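Sketched in code, the composition described in this bullet might look as follows. This is only a sketch: the example values and the concatenation order are assumptions, with subviewAnchorTranslation and matrix named as above:

```swift
import Foundation

// Global zoom/pan matrix (requirement 4); here an arbitrary 2x zoom.
let matrix = CGAffineTransform.identity.scaledBy(x: 2, y: 2)

// Translation remembered from moving subview's anchor point to (0, 0);
// the -100/-100 offset is an arbitrary example value.
let subviewAnchorTranslation = CGAffineTransform(translationX: -100, y: -100)

// Re-apply the anchor translation together with the global matrix on
// every update, so that zooming doesn't overwrite it.
let subviewTransform = subviewAnchorTranslation.concatenating(matrix)
print(subviewTransform.tx, subviewTransform.ty)  // -200.0 -200.0
```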
Ok, as you can see, everything works here. Time to try the next step.
Step 2 (works):
A copy of step 1 playground with modifications:
Added the container view; the structure now resembles the view hierarchy described at the beginning.
In order for subview, which is now a subview of container, to continue being displayed at the same position, it has to be moved up and to the left by -container.origin.
Now the zoom and pan calls are interleaved randomly with calls to change the frame position/size of container.
The crosses continue to be in sync. Requirements met: All from step 1 + requirement 6.
Gist with playground
Step 3 (doesn't work):
So far I have been working with a screen range that starts at 0 (the left side of the visible playground result), which means that container is not fulfilling its function of containing the range, i.e. requirement 7. In order to meet this, container's origin has to be included in the ratio calculation.
Now subview also has to be scaled in order to fit in container and display the cross at the correct place. This adds a second variable (the first being subviewAnchorTranslation), which I called contentScalingFactor, containing this scaling, which has to be included in subview's matrix calculation.
Here I've done multiple experiments, all of which failed. In the current state, subview starts with the same frame as container, and its frame is adjusted + scaled when the frame of container changes. Also, since subview is now inside container, i.e. its origin is now container's origin and not superview's origin, I have to update its anchor such that the origin is not at (0, 0) but at (-x, -y), x and y being the coordinates of container's origin, so that subview continues to be positioned relative to superview's origin. And it seems logical to update this anchor each time container changes its origin, as this changes the relative position from container's origin to superview's origin.
I uploaded code for this - in this case a full iOS project instead of only a playground (I initially thought it was working and wanted to test with actual gestures). In the actual project I'm working on, the transform works better, but I couldn't find the difference. Anyway, it doesn't work well: at some point there are always small offsets and the points/crosses get out of sync.
Github project
Ok, how do I solve this such that all the conditions are met? The crosses have to stay in sync, with continuous zoom/pan and changes to container's frame in between.
The present answer allows any view in the Child hierarchy to be arbitrarily transformed. It does not track the transformation, it merely converts a transformed point, and thus answers the question:
What are the coordinates of a point located in a subview, in the coordinate system of another view, no matter how much that subview has been transformed?
To decouple the Parent from the clipping Container and offer a generic answer, I propose placing them at the same level conceptually, and in a different order visually (†):
Use a common superview
To apply the scrolling, zooming or any other transformation from the Child to the Parent, go through the common superview (named Coordinator in the present example).
The approach is very similar to this Stack Overflow answer, where two UIScrollViews scroll at different speeds.
Notice how the red hairline and black hairline overlap, regardless of the position, scrolling, or transform of any and all of the views in the Child hierarchy, including that of Container.
Code
Coordinate conversion
Simplified for clarity: take an arbitrary point (50, 50) in the coordinate system of the Child view (where that point is effectively drawn) and convert it to the Parent view's system. It looks like this:
func coordinate() {
    let transfer = theChild.convert(CGPoint(x: 50, y: 50), to: coordinator)
    let final = coordinator.convert(transfer, to: theParent)
    theParent.transformed = final
    theParent.setNeedsDisplay()
}
Zoom & Translate Container
func zoom(center: CGPoint, delta: CGPoint) {
    theContainer.transform = theContainer.transform.scaledBy(x: delta.x, y: delta.y)
    coordinate()
}

func translate(delta: CGPoint) {
    theContainer.transform = theContainer.transform.translatedBy(x: delta.x, y: delta.y)
    coordinate()
}
(†) I have renamed Superview and Subview to Parent and Child respectively.
I have a custom table view and want it to look like this...
(source: pulsewraps.co.uk)
The image is loaded asynchronously and the two lines of text come from two different arrays. I can get all the data in fine, I just don't know how to lay it out.
I want:
the black gradient to overlay the image
the two lines of text to be within the black gradient box
the image to fill the table row, covering it and keeping its aspect ratio
the black gradient box to be pinned/constrained to the bottom of the image, so that if either line of text is longer than two lines it covers more of the image and doesn't drop below it.
I fill the table data in a loop according to the number of records in my array, which is populated from JSON.
I have managed to do the layout in Android but can't get my head around iOS.
Any help much appreciated.
If you're using Auto Layout, you'll want to constrain the labels to the bottom and to each other. Then put the gradient view behind the labels and constrain the top of the gradient to the top of the top label.
You'll have to handle drawing the gradient yourself: either use an image in an image view and set it to scale-to-fill, or subclass UIView and add a little bit of code to drawRect:. The first is probably easier; the second will produce a more uniform gradient if it has to be scaled.
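A minimal sketch of the drawRect: variant (the class name and the clear-to-black color stops are illustrative assumptions):

```swift
import UIKit

// Draws a vertical clear-to-black gradient; put this view behind the
// labels and constrain its top to the top label, as described above.
class GradientOverlayView: UIView {
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        let colors = [UIColor.clear.cgColor,
                      UIColor.black.withAlphaComponent(0.7).cgColor]
        let locations: [CGFloat] = [0.0, 1.0]
        guard let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                                        colors: colors as CFArray,
                                        locations: locations) else { return }
        // Top-to-bottom linear gradient across the view's full height.
        context.drawLinearGradient(gradient,
                                   start: CGPoint(x: rect.midX, y: rect.minY),
                                   end: CGPoint(x: rect.midX, y: rect.maxY),
                                   options: [])
    }
}
```

Because the gradient is redrawn at the view's current size, it stays uniform however tall the label stack makes it.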
Hello,
Looking for insight from the gurus.
I need a stack of 3 views:
A: View containing various buttons, text inputs.
B: Custom view containing a large view (B2) that moves and rotates and contains multiple subviews and another fixed view (B1) that has a custom gradient transparency mask.
C: Image view containing a background image
I can't figure out how to get the B1 layer working. I can make the gradient but am unsure how to apply the mask so it only affects the transparency of the subviews of B2. I need the background (C) to show through all the way to the top view (A). I was thinking of using a mask directly on B2, but can't, since it is moving around. Confused.
Any advice?
Have you tried using the mask layer of B's layer? Try setting it to B1 (not as a subview/sublayer) and updating its position as necessary when moving B2.
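A minimal sketch of that idea, using a CAGradientLayer to stand in for B1's gradient (the view names and gradient direction are illustrative):

```swift
import UIKit

// B stands in for the custom view that contains the moving B2.
let bView = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))

// B1's gradient, used as a mask rather than added as a sublayer:
// where the mask is opaque, B's content (including B2) is visible;
// where it is transparent, the background C shows through.
let maskLayer = CAGradientLayer()
maskLayer.frame = bView.bounds
maskLayer.colors = [UIColor.black.cgColor, UIColor.clear.cgColor]
maskLayer.startPoint = CGPoint(x: 0.5, y: 0)
maskLayer.endPoint = CGPoint(x: 0.5, y: 1)
bView.layer.mask = maskLayer

// When B2 moves, only the mask's frame needs updating, e.g.:
// maskLayer.frame = frameWhereB1ShouldBe
```

Since the mask belongs to B's layer rather than to B2, B2 can keep moving and rotating freely underneath it.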