I'm trying to define a placeholder for UITextView, but unfortunately, unlike UITextField, it doesn't have that attribute. To speed up my development I did some searching and found a library (https://github.com/MoZhouqi/KMPlaceholderTextView) that solves that problem.
My problem starts with the implementation of methods/functionality from another library (https://github.com/SwiftValidatorCommunity/SwiftValidator), which is used for field validation. Since I'm using a UITextView subclass, the validator doesn't recognize it as a validatable field and requires it to be cast as one. But once I cast it, the app crashes when trying to validate the KM text view.
My question is: is there any way to make the two libraries interact and be compatible with each other? If so, how can I do this?
Both libraries are under the MIT License. They are just source code that compiles and runs. You can pull them down into your projects and edit the code to get it to do whatever you like. It's just code. Don't be afraid of it (although I can understand why it would seem like a daunting task).
The problem you would then face is maintaining those modified libraries over time. If you carefully make and minimize your changes, you should be able to rebase them onto any changes made to the upstream repositories. You can set up your own git repository and keep your changes there.
I am new to Unity and I am trying to understand plugins. I have got the difference between a managed plugin and a native plugin, but what is not very clear to me is:
What is the difference between a plugin and a DLL? What should I expect to find in an SDK to make it usable in my Unity project?
Thanks a lot
To expand on @Everts' comment instead of just copying it into an answer, I'll go into a little more detail here.
What is a plugin?
It's a somewhat vague word for a third-party library that is somehow integrated with the rest of your game. It is neither officially supported by Unity nor a part of your core code. It can be "plugged" in or out without altering its internals, so it must provide some kind of API that can be used by the game code.
For example, you'll find many plugins that handle external services like ads, notifications, analytics, etc. You'll also find a couple of developer tools that can also be called plugins, like tile-based map editors and such.
Plugins come in many forms - DLL files are one example but some plugins actually provide full source code for easier use. And of course, other plugins will provide native code for different platforms, like Objective-C for iOS or .jars for Android.
So to answer your first question:
A DLL is simply pre-compiled code that can be part of a plugin
A plugin is a whole library that can consist of multiple files in different formats (.cs, .dll, .jar, .m, etc.)
What do you need to use an SDK?
First of all - documentation. Like I said before, and like you noticed yourself, not all plugins give you access to the source code. And unfortunately, not many SDKs have extensive, developer-friendly documentation, so it can be a tough task to actually understand how to use a given SDK.
Secondly - the code. Many SDKs give you some kind of "drag & drop" library, a single folder with all the necessary files inside that you simply add to your Unity project. I've also seen SDKs that use Unity packages that you have to import via Assets > Import Package > Custom Package.
Once you have the code and documentation, it's time to integrate it with your game. I strongly recommend using an abstraction layer in your game because, in my experience, you often have to change SDKs for various reasons and you don't want to rewrite your game logic every time. So I suggest encapsulating the SDK-related code in a single class so that you only have to change one class in your code when switching from, say, one ad provider to another (and keep the old class in case you need to switch back).
So you basically need three things:
Documentation (either a readme file or online documentation)
The code (precompiled or source)
A flexible, well-isolated integration
We need to use some 3rd-party libraries in our iOS app project (which is an Xcode project). The 3rd-party libraries are either managed by CocoaPods or added directly to the project by importing source code files.
Sometimes we need to modify a library's source code in order to use it appropriately in our project. Making the modification is easy since we have all the source code at hand, but how can we maintain the modification when, some time in the future, we upgrade the version of the 3rd-party library? Are there any tools or best practices out there that can help with this? And after the upgrade of the 3rd-party library, it would be good if we could tell from the code history which changes came from the library upgrade and which are our own modifications.
That depends entirely on what your change is...
The best option is to not change the external code. Instead, subclass the part you need to change and use only the public API, so you will find out early when updating whether the subclass needs to change. Unit tests also help.
If you can't subclass, then you should fork the external code; you then have to manually merge upstream updates on top of your change when you want them. This is obviously significantly more effort, both in terms of importing and managing the code.
I have a large application to manage consisting of three or four executables and as many as fifty DLLs. Many of the source code files are shared across many of the projects.
The problem is a familiar one to many of us - if I change some source code I want to be able to identify which of the binaries will change and, therefore, what it is appropriate to retest.
A simple approach would be to compare file sizes. That is an 80% acceptable solution, but there is at least a theoretical possibility of missing something. Secondly, it gives me very little indication as to WHAT has changed; it would be ideal to get some form of report on this so I can then filter out irrelevant changes (e.g. dates, version numbers, copyrights, etc.).
On the plus side:
all my .dcus are in a row - I mean they are all built into a single folder
the build is controlled by a script (.bat)(easy, for example, to emit .obj files if that helps)
svn makes it easy to collect together any (two) revisions for comparison
On the minus side
There is no policy to include all used units in all projects; some units get included because they are on a search path.
Just knowing that a changed unit is used/compiled by a project is not sufficient proof that the binary is affected.
Before I begin writing some code to solve the problem I would like to ask the panel what suggestions they might have as to how to approach this.
The rules of Stack Overflow forbid me from asking for recommended software, but if anyone has any positive experiences with continuous integration tools that would help - great.
I am open to any suggestion or observation that is relevant in this context.
It seems to me that your question boils down to knowing which units are contained in your various executables. Since you are using search paths, it will be hard for you to work this out ahead of time. The most robust way to find out is to consult the .map file that the compiler emits. This contains a list of all units contained in your executable.
Once you know which units are contained in each executable, you need to know whether or not anything has changed in those units. That information is contained in your revision control system. Put this all together and you have the information that you need.
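If it helps, here is a rough Delphi sketch of harvesting unit names from a detailed map file. It assumes the "Line numbers for UnitName(UnitName.pas) segment .text" lines the linker writes when the project's Map File option is set to Detailed - verify the format against your own output (IOUtils needs Delphi 2010 or newer; a plain TStringList.LoadFromFile works everywhere):

uses
  Classes, SysUtils, IOUtils;

// Returns the unit names listed in a detailed linker .map file.
function UnitsInMapFile(const MapFileName: string): TStringList;
const
  Marker = 'Line numbers for ';
var
  Line, Rest: string;
  P: Integer;
begin
  Result := TStringList.Create;
  Result.Sorted := True;
  Result.Duplicates := dupIgnore;   // a unit can appear once per segment
  for Line in TFile.ReadAllLines(MapFileName) do
  begin
    P := Pos(Marker, Line);
    if P > 0 then
    begin
      // 'Line numbers for MyUnit(MyUnit.pas) segment .text' -> 'MyUnit'
      Rest := Copy(Line, P + Length(Marker), MaxInt);
      if Pos('(', Rest) > 1 then
        Result.Add(Copy(Rest, 1, Pos('(', Rest) - 1));
    end;
  end;
end;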
Of course, just because the source code for a unit has changed, you might argue that re-testing is not needed. Perhaps the only change made was the version, or the date in a copyright label, or some such. But it is asking too much for a computer to make such a judgement. At some point a human needs to step up and take responsibility.
What is odd about this though is that you are asking the question at all. It seems to me to be enormously risky to attempt partial testing. I cannot understand why you don't simply retest the entire product.
After using it for more than 10 years for commercial in-house and freelance work on large projects, I can recommend trying Apache Ant. It is a build tool which supports dependencies and has many very helpful features.
Apache Ant also integrates nicely with CI tools such as Hudson/Jenkins, Bamboo etc.
Another suggestion - based on experience with Maven - is to design the general software architecture to be as modular as possible. If modules (single or multiple source or DCU files in one directory) carry a version number in the directory name, it is possible to control exactly how applications are composed from these modules.
If you want to program such a tool yourself, the approach would be something like this:
First you need to detect whether any changes were made to individual source files. As you already figured out, comparing file sizes is a bad idea, because the size can stay the same despite lots of changes (as long as a .pas file contains the same amount of text, its size won't change). Instead you could check each file's last modification time, or compute a hash value such as an MD5 hash for comparison (which can be quite slow).
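A minimal sketch of the hash variant, assuming a Delphi version that ships the System.Hash unit (XE7 or later; on older versions Indy's hashing classes can fill the same role). LastKnownHash would come from a state file written during the previous build:

uses
  SysUtils, System.Hash;

// True when a source file's current MD5 differs from the hash recorded
// at the previous build - catches edits even when the file size is unchanged.
function FileChanged(const FileName, LastKnownHash: string): Boolean;
begin
  Result := not SameText(THashMD5.GetHashStringFromFile(FileName), LastKnownHash);
end;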
Then you need to build yourself a dependency tree which tells you which files are used by which project/subproject.
Finally, based on the changes detected in individual files, you check the dependency tree to see which projects need to be recompiled.
The problem with this approach is that you would probably have to update the dependency tree manually each time a new unit is added to a project or an existing one is removed.
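A rough sketch of that final lookup step, keeping the dependency table as a simple project-to-units dictionary (Generics.Collections, so Delphi 2009 or later; how the table gets filled - by hand or from a config file - is exactly the maintenance problem mentioned above):

uses
  Classes, Generics.Collections;

type
  // Maps each project name to the list of unit names it compiles.
  TDependencyTree = TObjectDictionary<string, TStringList>;

// Collects the projects that use at least one changed unit and
// therefore need to be rebuilt and retested.
procedure ProjectsToRebuild(Tree: TDependencyTree;
  ChangedUnits, Affected: TStrings);
var
  Pair: TPair<string, TStringList>;
  UnitName: string;
begin
  Affected.Clear;
  for Pair in Tree do
    for UnitName in ChangedUnits do
      if Pair.Value.IndexOf(UnitName) >= 0 then
      begin
        Affected.Add(Pair.Key);
        Break; // one changed unit is enough to flag this project
      end;
end;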
But the best way would be to use some version control software instead of reinventing the wheel. I myself like the way Git works, and I believe that a proper Git implementation in the project manager itself could be quite powerful thanks to Git's support for branching/sub-branching (each project is its own branch; each version of your software can be its own sub-branch).
The latest version of Delphi does have Git integration, but it is done through the SVN support, which unfortunately limits some of Git's best functionality. So if you decide to integrate Git support directly into Delphi, I'm first in line to use it.
(This might be better in the TestComplete forums, but I thought I'd give it a shot here anyway)
We are looking into automated testing of our Delphi 2010 application with TestComplete. The main control our application uses is our own custom control that derives directly from TCustomControl.
(For reference, the control is like a diagramming tool that displays boxes with text in them. These boxes can be selected. The control is completely custom drawn, including the selection.)
We are looking into making this more TestComplete-friendly so we can read data out of it (e.g. what data is loaded into the control, what data is selected).
I should also mention that our application uses an MVC architecture and makes heavy use of interfaces. TestComplete's debug agent can't seem to return any type information about interfaces, and thus we can't get any data from them. I suspect this is the root of our problem.
I'm considering these two approaches:
Add new properties to the control that return information about the currently selected box(es), e.g. text in the box, position on screen, hierarchical path, and access them via TestComplete's debug agent.
Look at creating a custom control add-on for TestComplete (I'm not even sure you can do this with Delphi controls).
The problem with the first approach is that the linker will often eliminate properties and functions if they are not being used. We want to use our release build for testing, not a debug build.
Does anyone have any advice on this or experience with this type of thing?
Thanks
Edit: I've just read the SDK help, and custom control add-ons can only be created for .NET and WPF controls.
You are right about the debug info - you can generate it for a release build and strip it out of the binaries you actually ship. Thus, you can test a release build and have access to internals at the same time.
A note regarding this situation: "the linker will often eliminate properties and functions if they are not being used." You can cheat here to make the linker keep debug info for those functions:
Make the functions published. The linker does not touch published elements.
Make the functions virtual. The linker does not exclude virtual methods.
Call your functions somewhere in your code. To include the call in your code without actually calling anything, you can do something like this:
var
  t: Boolean;
begin
  t := False;       // a variable, so the compiler cannot fold the test away
  if t then         // never True at run time...
    TheFunctionThatNeverExecutes;   // ...but the linker still keeps the target
  ...
end;
You should reconsider your decision to use the release build for testing. The reason is that TestComplete needs some magic to make your testing life easier, and you don't want this magic present in the release build. So if you can elaborate on the reasons for not using a debug build for testing, we can try to find a solution that lets you change this decision. The result may be that you get access to all the relevant data of your control once you unlock the full power of TestComplete.
Now back to the original question: You can overcome the interfaces problem by creating some special classes that wrap those interfaces and thus make the properties available in TestComplete.
Create a small (perhaps invisible) test form where you centralize access to instances of these classes. And now the link back to release mode: only create this form in debug builds so that, when carefully designed, the test-only code is linked in only when it is needed for testing.
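A hedged sketch of such a wrapper - the interface here is a made-up stand-in for whatever your MVC model actually exposes. The point is that published properties on a TComponent descendant carry RTTI that TestComplete's debug agent can read, while interfaces do not:

unit SelectionProbe;

interface

uses
  Classes;

type
  // Hypothetical stand-in for the application's real selection interface.
  IBoxSelection = interface
    ['{9A1C2E44-5B31-4C0D-9F7A-2D8E61A0B3C5}']
    function GetSelectedText: string;
    function GetSelectedCount: Integer;
  end;

  // Re-publishes interface data as published properties so that
  // TestComplete's debug agent can see it.
  TSelectionProbe = class(TComponent)
  private
    FSelection: IBoxSelection;
    function GetSelectedText: string;
    function GetSelectedCount: Integer;
  public
    property Selection: IBoxSelection read FSelection write FSelection;
  published
    property SelectedText: string read GetSelectedText;
    property SelectedCount: Integer read GetSelectedCount;
  end;

implementation

function TSelectionProbe.GetSelectedText: string;
begin
  if Assigned(FSelection) then
    Result := FSelection.GetSelectedText
  else
    Result := '';
end;

function TSelectionProbe.GetSelectedCount: Integer;
begin
  if Assigned(FSelection) then
    Result := FSelection.GetSelectedCount
  else
    Result := 0;
end;

end.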
I'm using Delphi 2007. Sometimes properties linking to components get lost. These are typically Action properties and lookup datasets. A few times I have had emergency bug fixes go out to customers with somewhat catastrophic results because of this :-)
Anyone know a way to verify that properties that are supposed to be set really are set, or a way to prevent this from happening?
You can assign such values in code, obviously.
More importantly, though, you have to diff every file before committing to source control. Always.
Make sure your .dfm files are text, not binary. Then it will be easy to see unwanted changes before check-in/commit.
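For reference, a component link in a text .dfm is just a readable line (names here are examples); when the IDE loses the reference, the Action line simply vanishes, which is exactly what jumps out in a diff:

object btnSave: TButton
  Left = 8
  Top = 8
  Action = actSave
end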
Diffing everything has stopped a lot of potential blunders for me.
An automatic build and test system would also give you some confidence in what you deliver.
You've received several good answers about how to detect when this happens (upvoted). But one way to prevent it from happening (sometimes) is to make sure that you have added all of the referenced units to your DPR. If you open a form, for example, which contains components that reference other components on a data module, and that data module has not been added to the DPR/project, you're almost guaranteed to have the IDE remove those references, because it removes references it cannot determine are valid. If, on the other hand, the data module is in the DPR, then the IDE will be able to find it, and it is much less likely to remove the references in the first place.
Unfortunately, it still happens from time to time, so you still need to take the precautionary measures detailed in the other answers. But this will make things better, if you don't already do this.
Create a DUnit test project.
Run the tests before release.
Sound all the bells when a test fails.
You want a way to verify that the values are set correctly. Well, you can use unit tests for this. Just instantiate the form, check the properties, and you're done.
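A minimal DUnit sketch of that idea - form, control, and property names are invented for illustration and would be replaced by your real ones:

unit FormWiringTests;

interface

uses
  TestFramework;

type
  // Verifies that component links set in the designer are still assigned.
  TFormWiringTests = class(TTestCase)
  published
    procedure TestLinksAreStillAssigned;
  end;

implementation

uses
  MainFormUnit; // hypothetical unit containing TfrmMain

procedure TFormWiringTests.TestLinksAreStillAssigned;
var
  Form: TfrmMain;
begin
  Form := TfrmMain.Create(nil);
  try
    // Fails loudly if the IDE silently dropped one of these references.
    CheckNotNull(Form.btnSave.Action, 'btnSave lost its Action');
    CheckNotNull(Form.cbCustomer.ListSource, 'cbCustomer lost its lookup source');
  finally
    Form.Free;
  end;
end;

initialization
  RegisterTest(TFormWiringTests.Suite);

end.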
Comparing the .dfms is also a good way, of course, but it does not take into account changes due to changed defaults or changes in the code.
When you add a form, data module or frame to a project, the IDE inserts a little comment "tag" after the unit name in the dpr file. It has been my experience that if for any reason this tag is not present, the IDE is more prone to losing cross-module component references.
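For reference, the tag looks like this in the .dpr's uses clause (unit and form names are just examples):

uses
  Forms,
  MainFormUnit in 'MainFormUnit.pas' {frmMain},
  CustomerData in 'CustomerData.pas' {dmCustomer: TDataModule};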
I wholeheartedly support the idea of always viewing differences before each commit to version control, if you are using such a thing.
I hate to say this, but source code control can help in these situations. Even with an emergency bug fix you should be checking everything into a source code repository (Perforce is my personal favorite). For a small fix, you could see from the list of changed files whether you have any changes you are not expecting.