Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 9 years ago.
I have a transparent PNG image I'm using for a button on a web page. I'd like to be able to replace the image's text with some other text using GIMP 2. Any ideas how I can easily achieve this? I'm a complete novice with GIMP!
Your button has a top-down gradient, but it should still be easy to pull the trick off.
To get the job done, I suggest you follow these steps:
obtain a template button, i.e. a button without text.
modify the template at will, to get a button with any kind of text.
addressing point 1
Please notice that the gradient of the button is vertical and homogeneous. This really simplifies the work: simply drag a small selection on the left side of the button, big enough to cover the height of the text, as shown below.
Then copy and paste the selection into a new layer (L1). Move layer L1 to the right (use the arrow keys to preserve vertical alignment) to cover the white text, then copy L1 into a new layer (L2). Now you have another layer to put next to L1 to cover another piece of text. Iterate until the text is completely covered. Now you can merge all the layers.
addressing point 2
You have your button template! The last thing to do is to add the text. Choose the text tool and position the cursor in the middle of the button to write the new label. I tried to reproduce the shadow effect of the original text by setting an inner shadow with 50% opacity, a 90-degree direction, and a 1px distance.
Here you can find the PNG template. I did this tutorial in Photoshop, as it is the program I have at hand.
You can't. Text in a PNG can't be changed once it has been exported.
Create a Button.
Add Layers for each text you want to use.
Deactivate all the layers except the one you need, and export.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I was playing with the iOS built-in Compass app, and the UI made me curious.
Here is the interesting part:
The color of the text (and even the circle) can be partially and dynamically changed.
I have searched a lot, but the results are all about attributed strings. How can I implement an effect like this?
Edited:
I have tried adding two UILabels (whiteLabel and blackLabel) with the same frame, whiteLabel at the bottom and blackLabel on top. Then I set the circle as the mask of blackLabel.
The problem is that whiteLabel is totally covered by blackLabel, and if the circle does not intersect blackLabel, neither label is visible.
I imagine that there are two "14" labels in the same place. The bottom one is white and unmasked, and the top one is black and has a layer mask that contains two circles, so it's only visible where the circles are.
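The two-label idea can be simulated outside UIKit to check the logic. This is a minimal sketch, not UIKit code: the glyph pixels and circle are hypothetical stand-ins for the two label layers and the mask.

```python
# Simulate the "two stacked labels" trick: a white glyph underneath and a
# black copy on top that is only visible where a circular mask covers it.

def render(glyph_pixels, circles):
    """glyph_pixels: set of (x, y) text pixels; circles: list of (cx, cy, r)."""
    def in_circle(x, y):
        return any((x - cx) ** 2 + (y - cy) ** 2 <= r * r
                   for cx, cy, r in circles)
    image = {}
    for (x, y) in glyph_pixels:
        # The bottom label is white; the masked top label overrides it with
        # black wherever a circle covers that pixel.
        image[(x, y)] = "black" if in_circle(x, y) else "white"
    return image

glyph = {(0, 0), (1, 0), (2, 0)}     # a tiny 3-pixel "glyph"
out = render(glyph, [(0, 0, 1)])     # circle covers x = 0..1 at y = 0
```

In the real app, the per-pixel choice is done for you by assigning the circle shape to `blackLabel.layer.mask`; the sketch only shows why the white copy must sit underneath, unmasked.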
As Woodstock said, achieving this most probably has nothing to do with NSAttributedString.
I'd say it's the UILabel's layer that is recolored live, depending on which other layer it intersects and the overlapping area of that intersection.
Once you figure out those common points, you just apply a mask that inverts the colors from there.
Now, it's a little more complicated than that, since there appear to be two circles (hence two layers to find intersections with), but in the end it's "just a list of coordinates" that the label's coordinates either intersect or don't.
That could be an interesting exercise; it would probably take me a decent number of tries to mimic that behaviour, but I'm pretty confident my reasoning is on point. (Get it? :o)
At a high level, we’re trying to understand the best way for Framer to render a layer with a dynamic height.
We’re importing our designs into Framer from Sketch. We’re building a simple question and answer page. Each page has 1 question and the user will work through 20 questions, one question page after another.
Here’s a rough version of the design:
The header, footer, and prompt are always the same height.
The question and answers section changes based on the question and answers we want to render.
We see three possible solutions:
Create 20 different sketch templates/cards, each one representing a rendering of a different question (for example card1 would be listing question1 with answers1). When the first question is answered we render the next sketch card to show question2 with answers2. To be clear, I’m suggesting that each card contains everything: the header, question, prompt, answers, and footer.
Have 1 sketch template/card that is the full rendering of the header, question1, prompt, answers1, and footer. Have a bunch of mini template/card layers that are also imported into Framer. There would be a layer for each question and a layer for each set of answers. When Framer tries to load question 2, it would load the full rendering of question1 with answers1 (and header,footer,etc), and swap the layer containing question1 with question2, and swap the layer containing answers1 with answers2. There’s a problem, question2 is smaller than question1, so now there’s a large white-space between the questions and prompt layer. So, programmatically go through the layers in Framer and properly align them using align.
Have several small layers (header, question1, question2, …, question20, prompt, answer1,...,answer20, footer) and using Framer build the page by adding each layer one after another. Also write CSS to match the design pixel perfect.
Problems:
Our issue with #2 is that there's a lot of upfront work to reposition all the layers. As far as we can tell, Framer copies all the Sketch layers and positions them based on their absolute x and y coordinates. Since all layers are positioned absolutely, we would have to go through each layer and re-apply alignment depending on how the layers are positioned in reference to each other. This seems really kludgy, and as far as we can tell there's no helper function in Framer to achieve it. Sure, we could write something that walks the parent tree and re-aligns siblings, but this feels way too error-prone and not in the spirit of building prototypes with Framer.
Our issue with #3 is that it just feels like building the website, not prototyping. There's a lot of pixel pushing, since we can't directly import the pixel-perfect Sketch design. We want to quickly update the prototype when designs change, and we expect the design will change frequently because we're running a lot of user tests and getting great feedback. This method would have me spend a considerable amount of time 'redesigning' my prototype each time a design changes.
Thoughts? Are we missing something obvious?
Thanks for reading and thanks for your time.
I don't know how much chrome you have around the cards, and if you want to define the margin between the layers inside Sketch, but writing a function to align a list of layers with variable height on top of each other doesn't actually have to be that hard:
stackLayers = (layerList, startY = 0, spacing = 0) ->
    currentY = startY
    for layer in layerList
        layer.y = currentY
        currentY = layer.maxY + spacing
Full example here: http://share.framerjs.com/sztcm5e9l4li/
If you keep the names of your Sketch layers containing the cards the same, you can re-import from Sketch and the repositioning will keep working, even if the size changes.
Because the structural tree of groups in your Sketch file is contained inside Framer, you can place all the layers you want to stack on top of each other in one group and call stackLayers(sketch.nameOfGroup.children, 0, 10) to stack all children of 'nameOfGroup' on top of each other.
We ended up solving this with approach #2. In summary: swap the layer and reposition all the layers below, also go through all the parents and resize as necessary (for example if the new layer is smaller than the old layer we're swapping with we'd need to shrink the parent layers by the difference).
The general approach was:
Use TextLayer to create a new layer with the new question text
Measure the size difference between the original sketch question text layer and our new TextLayer
Resize the parent layers based on the difference calculated in #2
Remove the original sketch layer
Insert the new TextLayer in the same x and y coordinates as the original sketch layer
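The steps above amount to simple layout arithmetic: measure the height difference, then shift everything below by it. A language-agnostic sketch (Python here, with plain dicts standing in for Framer layers; the names are hypothetical):

```python
# Sketch of the swap-and-reposition arithmetic: replace one layer in a
# vertical stack with one of a different height, shifting the layers below.

def swap_layer(layers, index, new_height):
    """Replace layers[index] with one of new_height; shift later layers."""
    diff = new_height - layers[index]["height"]
    layers[index]["height"] = new_height
    for layer in layers[index + 1:]:
        layer["y"] += diff           # everything below moves by the difference
    return diff                      # parents would grow/shrink by this amount

layers = [
    {"name": "question1", "y": 0,  "height": 40},
    {"name": "prompt",    "y": 40, "height": 20},
    {"name": "answers1",  "y": 60, "height": 80},
]
diff = swap_layer(layers, 0, 25)     # question2 is 15px shorter
```

In Framer the same `diff` is what you'd use to resize the parent layers in step 3.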
Code gist: https://gist.github.com/MrNickBreen/5c2bed427feb8c701d5b6b1fbea11cb4
Special thanks to Niels for the suggestion which helped me figure this out.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I'm building a 2D game using SpriteKit and this is what I would like to achieve.
Imagine a vertically scrolling SKSpriteNode which represents a tall building. The building is represented by a simple image and has a physics body set with + (SKPhysicsBody *)bodyWithTexture:(SKTexture *)texture size:(CGSize)size; (introduced in iOS 8), so it closely follows the building's outline.
Some parts of the building are special, and colliding with those parts should yield a special collision action. For example, touching the wall of the building would fire action 1, but touching any of the windows would fire action 2.
What I haven't been able to do is in some way define those "special blocks" of the building.
I was thinking about making some kind of "collision map" for each of the building's sprite images, which would basically be a transparent image with non-transparent blocks marking the collidable parts of the building. A simple example is shown below (left: building image, right: collision map image):
The problem with this approach is that when setting an SKPhysicsBody from a "collision map" image like the one above, the body is not applied to all the blocks; it wraps around just one of those separate blocks. In other words: one physics body can only be applied to one continuous block in the image.
To conclude, I would like to know which approach you use when dealing with non-continuous collision maps.
P.s.: building's SKSpriteNode is represented with multiple unique texture images which are scrolling vertically, one after another.
Thank you in advance.
Just an idea:
Couldn't you use two sprites for the building, positioned at the same place:
- one represents the physics body of your building (the left one from your image)
- invert your collision map image to get a single physics-body block. The special areas should overlap the non-special area by one pixel
Hope you understand what I mean. It's just an idea.
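Another way to deal with a non-continuous collision map is to split it into its connected blocks first and give each block its own physics body. This is a sketch of that idea, not SpriteKit code: a 4-connected flood fill over a tiny boolean grid standing in for the collision-map image (1 = collidable pixel).

```python
# Split a non-continuous collision map into separate connected blocks,
# so each block can get its own (continuous) physics body.

def find_blocks(grid):
    """Return one set of (row, col) cells per connected collidable block."""
    rows, cols = len(grid), len(grid[0])
    seen, blocks = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                stack, block = [(r, c)], set()
                while stack:                      # iterative flood fill
                    y, x = stack.pop()
                    if ((y, x) in seen or not (0 <= y < rows and 0 <= x < cols)
                            or not grid[y][x]):
                        continue
                    seen.add((y, x))
                    block.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
                blocks.append(block)
    return blocks

grid = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 1]]
blocks = find_blocks(grid)           # two separate blocks
```

Each resulting block (or its bounding box) can then back one SKPhysicsBody, and the bodies can be combined with bodyWithBodies: or given different category bit masks for the "wall" and "window" actions.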
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am interested in drawing a path with a finger (like connecting objects) between two elements. I am not sure how I would start on this.
I know that I could use a Bezier path to create lines, but I am not sure how to create that line with a finger. Does anyone have a good example?
I tried to Google it, but I can't find anything like that.
Thanks
I answered a question recently about slow/laggy performance on a similar setup, i.e. drawing UIBezierPaths in a CALayer. The answer contains a subclass of UIView which you can drop into a storyboard and which will pretty much get you started. The header file is not shown in the answer, but it is literally a subclass of UIView (just add a UIView subclass to your project). You should be able to copy the rest into your implementation file. Obviously, you'll want to take out the performance-testing code.
touchesMoved drawing in CAShapeLayer slow/laggy
If you simply want to add a single line, you just need to get the starting point in touchesBegan and build the path in touchesMoved. The commitCurrentRendering method simply renders the accumulated touch points, then clears the UIBezierPath. This improves performance, as there was a notable slowdown once the UIBezierPath reached around 2000 points (touchesMoved will feed you a succession of points as your finger moves).
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
I have a bunch of UIImages that consist of black-and-white book pages. I want to be able to "crop" or "cut" each image based on where the page content ends (where the white space begins). To illustrate what I mean, look at the image below: I want to crop the image programmatically right after the word "Sie".
I am not sure how to go about this problem. I have given it some thought, however: perhaps detect where the black pixels stop, since the pages will always be black and white, but I'm not sure how to do this properly. Can anyone offer any insight or tell me how this might be done?
Thank you so much!
Here's some Python code I wrote using the Python Imaging Library (Pillow). It finds the lowest black pixel and then crops the image five pixels down from that y value.
from PIL import Image   # Pillow; the original PIL used "import Image"

img = Image.open("yourimage.fileformat").convert("RGB")  # ensure RGB tuples
width, _ = img.size
lowestblacky = 0
for i, px in enumerate(img.getdata()):
    if px[:3] == (0, 0, 0):        # a pure black pixel
        y = i // width             # integer row index of this pixel
        if y > lowestblacky:
            lowestblacky = y
img.crop((0, 0, width, lowestblacky + 5)).save("yourimagecropped.fileformat")
Python is available on nearly all operating systems, so I hope you'll be able to use this. If you want to crop the image right after the last black pixel, simply remove the "+5" from the last line, or change the value to your liking.
Convert the image into pixel values. Start with the bottom line of pixels and look for black pixels. If any are found, that line is the end of the page content. If there is no black pixel, continue with the next line up.
Actually, looking for a pure "black" pixel is probably too simplistic. A good solution would use a gray threshold (every image has some noise) or, even better, a change in pixel contrast. You can also apply an image filter (e.g. edge detection or sharpen) before starting the detection algorithm.
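The bottom-up scan with a gray threshold can be sketched in a few lines. The page is modelled here as plain rows of grayscale values (0 = black, 255 = white) so the logic stands on its own; with Pillow you would feed it the rows of img.convert("L").

```python
# Bottom-up scan for the last row containing "dark enough" pixels.

def find_content_bottom(rows, threshold=128):
    """Scan rows from the bottom; return index of the first row with a dark pixel."""
    for y in range(len(rows) - 1, -1, -1):
        # A threshold beats testing for exact black: scans always carry noise.
        if any(value < threshold for value in rows[y]):
            return y
    return 0  # page is entirely blank

page = [
    [255, 255, 255],
    [255,   0, 255],   # the darkest content ends on this row
    [255, 255, 255],
    [255, 255, 255],
]
bottom = find_content_bottom(page)
```

The crop box would then run from row 0 down to `bottom` plus whatever margin you want, exactly as in the PIL answer above.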