How to control the start level of bundles deployed from the pickup directory? (Equinox)

Is it possible to control the start level of bundles dropped in the pickup folder? Is there a way for a bundle to ask that it not be activated until a certain other service becomes available?
We have many bundles with interdependencies managed entirely by start levels; i.e. if Bundle Y needs Bundle X at its start-up then Y has a higher start-level.
This does not work well if a bundle is dropped in the pickup directory. It seems these bundles start before any others, and in an arbitrary order among themselves, perhaps concurrently.
Is there a way to control the start level of bundles in the pickup folder?
Even better, is there a way for a bundle to declare its dependencies on other services? In that case I could even use that with multiple bundles in pickup, with one dependent on another.

You can only make the resolution of bundles conditional on the availability of capabilities, but not their activation. Activation certainly happens after resolution, but resolved bundles may be activated in any order, so bundle-level activation dependencies don't work well.
Instead, you should use "components" (i.e. declarative services), which have the option to bind their lifecycle to the availability of other components. With this option the bundles may start in an arbitrary order, but each service is still only activated/exposed once all of its service dependencies are available.
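As an illustration, here is a minimal sketch of such a component using the standard Declarative Services annotations; PaymentService, OrderProcessor and the method bodies are hypothetical:

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Hypothetical service interface, used only for this example.
interface PaymentService {
    void charge(String orderId);
}

// This component is not activated until some bundle registers a
// PaymentService, regardless of the order the bundles were started in.
@Component
public class OrderProcessor {

    // Mandatory, static reference (the DS default): DS delays activation
    // of this component until the service is available, and deactivates
    // it again if the service goes away.
    @Reference
    private PaymentService paymentService;

    @Activate
    void activate() {
        // Safe to use paymentService here; DS guarantees it is bound.
        paymentService.charge("example-order");
    }
}

Because the dependency is declared on the service rather than on a start level, bundles dropped into the pickup directory can start in any order; DS wires the components up once their references can be satisfied.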

Related

TFS server workspace with detection of local changes

I would like to leverage the benefits of a server workspace (seeing who has checked out which file) together with the ability of the Team Explorer - Pending Changes to detect local changes (which with my current configuration works only when using a local workspace).
Is there a possibility to configure such behavior? I do not understand which technical limitation makes my server workspace incapable of sensing that I added, removed or changed files without checking them out first. It should at least be capable of showing these and then prompting me to check them out before I can include them for commit.
Is there a possibility to configure such behavior?
Nothing is built in. If you take locks on the server, you need to explicitly notify the server, and that would mean having something running all the time to check for file changes and see whether a lock could be taken (and how would arbitrary tools handle that failing?).
You could create something yourself to do this (the TFS-VC API is available).
Meanwhile, most developers find the local workspace model works better (it doesn't require exclusive access, and in the cases where there is a conflict, it is resolved at check-in).

Managing multiple versions of an iOS App

Let's say I have an iOS app for, say, football news, and now I want to create another version for basketball news that will be based mostly on the football app, but with the freedom to give each app different behaviour in some aspects, plus the ability to add more apps for other news subjects in the future.
Another requirement is that each app will have its own separate Core Data model, assets, icon, etc.
As I understand it, I have a few options:
Manage the apps separately, place them in the same directory and point to the shared files in the first one (the football app).
Create a different target for each app in the same project
Create a workspace with one project that will hold the common code and a project for each app.
What are the pros/cons of each option, and what are the best practices in this situation?
Just to clarify: the apps I mention are an example; the app is not for news, and it must be a different app for each concept.
Thanks
I work in an enterprise environment, and we have a mobile app that's a product of the company I work for. We sell licenses for that software to our customers, which are always huge companies. Our app doesn't go through the App Store.
Each of our clients has some sort of customization in the app, from simply changing their logos to adding features specific to one of them. What I mean is: we deal every day with a situation very close to what you are describing, so here's my two cents.
In advance: sorry if I'm too honest sometimes; I don't mean to offend anyone.
1. Manage the apps separately, place them in the same directory and point to the shared files in the first one (the football app).
Well... That's a weird solution, but it sure could work. It might be hard to maintain locally, and even harder when using SVN/Git (especially when working on a team).
I've had some issues related to symbolic links before, but I'm not sure if that's what you are referring to in this option. If you explain a little better, I can edit this and try to give you a better opinion.
2. Create a different target for each app in the same project
That's a better start, in my opinion.
We use this approach mostly to handle the various possible backend servers. For example, one of our targets uses our development backend server, while another target uses the production server. This helps us ensure that we can use the development-targeted app without risking serious costs to our team (due to a mistakenly placed order, for instance).
In your case, you could for example configure preprocessor macros on the targets to enable/disable some target-specific feature that's called by code. You could also use different storyboards for each target.
The downside of this option is that the code can get messy, because every piece of code lives in the same project. This is the main reason I'd go with option #3.
3. Create a workspace with one project that will hold the common code and a project for each app.
Again, I'd go for this. To be honest, we're not using this at our company YET, but that's due to internal reasons; I'm trying to get it going for our projects as soon as possible.
I wouldn't call it easy to set up, but if done properly it can save you time on maintenance. You'll be able to reuse any code that can be reused, and still keep target-specific images, classes and views in their own "container" (project).
This way you'll get a default project (the app itself), multiple targets for it, and a "framework" to keep the code for each one of the targets. In other words, you'll be able to share code between the multiple targets/apps, and at the same time you'll be able to separate what belongs to each one of them. No messy project :)
I'm not sure how Core Data is compiled by Xcode, as we're not using it. But check out the answer I just wrote for another question. It's not Swift, but that shouldn't make much difference, as almost all of the answer is about configuring the workspace to achieve this solution. Unfortunately I think it's too big, which is why I'm linking the answer instead of pasting it here.
If you need any help setting that up, let me know and I'll do my best to help you.
This may be overkill for you, but this solution is scalable. We had to build ~15 apps from one codebase.
The problem we had to solve was branding. Application design and flow were basically the same, along with the structure of the data we received.
A lot of the heavy lifting was done by our CI server.
We had a core application with all of the UI and some common business logic. This was known as the White-app.
We then had a specific project (frameworks didn't exist then) for each of the different endpoints and data models, plus mappers into the White-app's view models. Those applications were private pods managed by CocoaPods.
Our CI was configured in such a way that it would compile all the 'branded' apps, copying, compiling and signing the varying plists, assets and strings files into each application, along with each application's specific data models. So when an end-to-end build was triggered, it would build all the different branded apps.
The advantage of this is that the target layout within Xcode is not cluttered: we had release, test and development targets which applied to each application built. This meant our project was succinct, with no risk of accidentally editing a branded app's build settings.
This solution will also provide you with an .xcworkspace (mostly utilised by CocoaPods) which contains references to the different model pods.
This solution does take work to set up; for example, when building in Xcode we created a special scheme which installed a pod and copied in all the correct assets (as the CI would).
This is a question that many developers have thought about many times, and they have come up with different solutions specific to their needs. Here are my thoughts on this.
Putting the common parts, which you could see as the core, into something separate is a good thing. Besides supporting reusability, it often improves code quality through the clear separation and clean interfaces. In my experience, it also makes testing easier. How you package this is determined by what you put in there. A static library is a pretty good start for core business logic, but it lacks support for Swift, and resources are painful to include. Frameworks are great, but they raise the bar on the minimum iOS deployment target. Of course, if you're just using very few files, simply adding the folder to your app projects might work as well - keeping the project structure up to date can be automated (the dropbox/djinni tool does this), but it's a non-trivial approach.
Then there are the actual products to build, which must include the core module, and the individual parts. This could be a project with several targets, or a workspace with several projects, or a mix of both. In the given context, I make my decision based on how close the apps relate. If one is just a minor change from the other, like changing a sports team, or configuring some features out as in light vs. pro, this would be different targets in the same project. On the other hand, I'd use different projects (maybe arranged within a common workspace) if the apps are clearly different, like a Facebook client and a Twitter client, a board game app for offline play and an online gaming app etc.
Of course, there are many more things to consider. For example, if you build your app for clients and ship the sources, separate projects are probably needed.
For all three options, it's better to create a framework that contains most of the shared code you need. The first option is bad in any case; for better control, go with the second or third. The workspace is preferable, imho, since it won't harm the other sub-projects if you decide, for example, to use CocoaPods. The workspace also allows you to have a different set of localizations in each project. Plus, only the targets related to a specific project will appear in the targets list, which is better than a bunch of targets in one pile (if you have, for example, a share extension in all products, it will be frustrating to find the one you need). What you choose depends on your needs, but both the second and third options are good enough.
I think the best way to do it is something that combines all three.
First, I would create a configurable framework that shares with all targets everything they have in common, from UI (elements such as custom alerts, etc.) to business logic.
Then I would create different bundles or folders for each target, checking the target membership (this way you guarantee that only the exact resources are imported); then, using preprocessor macros, you can create a path builder specific to the bundle or directory where each target's resources reside.
During the years I've collected some interesting links about best practice.
Here they are:
Use asset catalog with multiple targets
Use multiple targets Xcode 6
Xcode groups vs folders
Create libraries with resources
Create lite and pro version of an app
I know that in Swift they made some changes to preprocessor macros, so some articles are still valid but a little bit outdated.
We all face this kind of situation. Here are the things I do; maybe you can pick something here that can help you (I hope).
have a project that contains the core features
have modular projects that can be used by other variants of the product
manage the project under version control (e.g. git flow), which helps keep the main source/project under the main branch, accessible through branches/features
create a new branch/feature for the project variant if necessary, or just enable/disable or use the project modules needed for that variant (whatever is most appropriate in the current setup)
if the app has a web service that it connects to, provide a licensing stage where the mobile app makes its first-ever request to a web service URL common to all variants (or even all mobile apps). This web service interprets the request and responds with the app's settings (e.g. the web service to connect to for the given license number, the modules to be enabled, the client's logo, etc.)
the main projects and modules created can be converted to frameworks, libraries or even bundles for resources and assets, depending on the level or frequency of changes to these items. If these items are constantly changing or updated by others, then don't package them; have a workspace with targets that link the whole project/module to the current project variant so that changes to these modules are reflected immediately (with consideration of version control, of course)

Does CloudFoundry provide Application-as-a-Service?

I'm looking for a cloud-based (public, private or hybrid) solution that allows me to configure every detail about the platform (container, system stack, virtualized hardware, etc.) for my app, but also deploys a templated version of my app on all app server nodes as soon as I run my first build. Hence I configure the app/platform, click a button, and boom: I have a WAR deployed and running across a cluster of nodes. (Granted, since I have not written any code at this point, this deployed WAR would have de minimis code inside it and would constitute the bare minimum code required to produce a WAR. In the case of Grails, it might just be the code generated by running grails create-app myapp.)
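To make "de minimis" concrete: in plain Java (rather than Grails), such a templated WAR might contain nothing but a single placeholder servlet along the lines of the hypothetical sketch below; the class and message are illustrative, not part of any real template.

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical placeholder: the kind of minimal code a templated WAR
// might contain before any real application code has been written.
@WebServlet("/")
public class PlaceholderServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Template app deployed and running");
    }
}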
I'm calling this "Application-as-a-Service" because it is not only a traditional PaaS but also goes one step further and deploys packaged WARs using some kind of templated source code repo.
My question: CloudFoundry says they support multiple frameworks (Spring, Grails, etc.); does this mean it can do what I describe above? If not, what does CloudFoundry mean when they say that they "support the Grails framework"?
Using CF you are able to configure the platform/OS. The container currently used is Warden. The virtualized hardware depends on the IaaS used under CF. Then you may 'click a button' and your app will be deployed and running across a cluster of nodes (DEA instances).
CloudFoundry says they support multiple frameworks (Spring, Grails, etc.); does this mean it can do what I describe above?
I don't fully follow what you're trying to do above, but I can tell you about the general workflows for CloudFoundry.
As an administrator, you use BOSH to deploy CloudFoundry to your IaaS. You can control anything you want at the IaaS level (assuming you're an administrator for your IaaS), so long as you meet the requirements of CF like storage, memory and CPU. In addition, you can control the CF deployment by adjusting the various configuration settings (YAML files) for CF. This allows you to tune the amount of resources (memory, CPU, disk space, etc.) that are available for application developers.
As a developer, you take your application and push it to a running CF installation. You may or may not be the administrator of that CF installation; if you are not, then you'll be subject to the policies of the administrator.
The push process takes your application (Ruby, Python, PHP, Go, Java, Grails, etc.) and uploads it to CF. From there, the application files are turned into a droplet by the build pack. This process is simpler than it sounds: the build pack just adds all the things that are necessary to run your app, like a web server or app container. With the droplet created, CF will run it, or even multiple instances of it if you so desire.
A couple further notes / clarifications:
How much memory your application gets is up to the app developer, and it can be adjusted at the time an app is pushed or later using cf scale.
CPUs are shared across all apps. You cannot reserve or guarantee usage per app. Application usage is limited though, so one app cannot steal all of the available CPUs.
The amount of disk storage per app is set by the administrator.
Disk storage for applications is ephemeral and is cleared when an application is restarted. Persistent storage should be provided through a service.
As you can see, CF administrators have a good degree of control over the system. Application developers not so much, but that is in fact the point of PaaS. Application developers don't want to waste time playing sys-admin. They just want to run their apps.
If not, what does CloudFoundry mean when they say that they "support the Grails framework"?
What is meant by this is that you can take a WAR file produced by Grails and run it on CloudFoundry without any additional work.
The CloudFoundry system uses build packs to handle the process of taking your application (i.e. your Grails WAR file) and preparing it to run. As I mentioned above, this usually involves installing and configuring some sort of server. In the case of Java / Grails, it involves setting up Apache Tomcat and configuring it to run your application (Note, if you don't like the way it's configured by default, you can customize or create your own build pack to configure it exactly the way you like).
CloudFoundry supports Grails and other JVM based languages because it can take those applications and automatically install, configure and run them.

Using WiX to generate an installer for an ASP.Net MVC website

Has anyone used WiX to generate an installer for an ASP.Net MVC website? Do you harvest files from the web project? I can't find any good examples of this being done. There doesn't seem to be a documented way to include all the right files, and only the right files, and put them in the right place.
If you add the website project as a reference in the installer project, and set harvest=True in the properties, then all the website files are captured, but there are issues:
Some files that should not be copied are included, e.g. packages.config and Web.Debug.config. There doesn't seem to be any clear or simple way to exclude them (as per this discussion).
The website .dll file ends up in the wrong place: in the root rather than the bin folder (as per this discussion)
However, if you do not use harvesting, you have a lot of files to reference manually (e.g. under \Content\ alone I have 58 files in 5 folders, most of it jQuery UI), and they change from time to time, so errors and omissions could easily be missed in a WiX file list. It really should be kept in sync automatically.
I disagree with the idea that the list of files should be specified explicitly in WiX and not generated dynamically (which is what seems to be suggested at the first link; the wording isn't very clear). If I need to remove a file, I will remove it from the source control system; there is no need to do the extra work of maintaining two parallel but different catalogues (one set of files in source control, and the same files listed in WiX). There should be one version of the truth. All files in the website's source tree (with certain known exceptions that are not used at runtime, e.g. packages.config) should be included in the deployment.
For corporate reasons, I don't have much choice about using WiX for this project.
In our MVC 3 project we use Paraffin to harvest files for the installer. For example, you can use "-ext <extension>" to ignore files with a given extension, "-regExExclude <pattern>" to ignore file names matching a regular expression, etc.
Paraffin also keeps the proper structure, all your files would be in the correct folder as they appear in your project.
I use a program that I wrote called ISWIX, which makes authoring wxs merge modules a simple drag-and-drop operation, like InstallShield. I then consume that merge module in an installer that handles the UI and IIS configuration.
I also have post-build automation that extracts the content of the MSI and compares it against what the project published. If there is a delta, I fail the build, and you have to either a) add the file to the wxs or b) remove it from the publish.
I find that the file-count churn from build to build is minimal and that this system is not difficult to maintain. The upside is that everything remains 100% intentionally authored, and files never magically appear in or disappear from the installer unless you intended them to. Dynamic installer generation isn't worth the risk, and most people who argue that it is don't even know what those risks are.

How to Sandbox Ant Builds within Hudson

I am evaluating the Hudson build system for use as a centralized, "sterile" build environment for a large company with very distributed development (from both a geographical and managerial perspective). One goal is to ensure that builds are only a function of the contents of a source control tree and a build script (also part of that tree). This way, we can be certain that the code placed into a production environment actually originated from our source control system.
Hudson seems to run Ant scripts with the full set of rights assigned to the user invoking the Hudson server itself. Because we want to allow individual development groups to modify their build scripts without administrator intervention, we would like a way to sandbox the build process to (1) limit the potential harm caused by an errant build script, and (2) avoid all the games one might play to insert malicious code into a build.
Here's what I think I want (at least for Ant, we aren't using Maven/Ivy right now):
The Ant build script only has access to its workspace directory
It can only read from the source tree (so that svn updates can be trusted and no other code is inserted).
It could perhaps be allowed read access to certain directories (Ant distribution, JDK, etc.) that are required for the build classpath.
I can think of three ways to implement this:
Write an Ant wrapper that uses the Java security model to constrain access (see the sketch after this list)
Create a user for each build and assign the rights described above. Launch builds in this user space.
(Updated) Use Linux "jails" to avoid the burden of creating a new user account for each build process. I know little about these, though, but we will be running our builds on a Linux box with a recent Red Hat EL distro.
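To make the first option concrete, here is a minimal, hypothetical sketch of such a wrapper. It assumes a build.workspace system property set by the build server, relies on the (era-appropriate) java.lang.SecurityManager, and deliberately simplifies the path check; a real policy would also constrain reads, deletes and process execution.

import java.io.File;
import java.security.Permission;

// Hypothetical launcher: installs a SecurityManager that rejects file
// writes outside the build workspace, then delegates to Ant's main class.
public class SandboxedAntLauncher {

    public static void main(String[] args) {
        // Workspace root passed in by the build server (assumed property).
        final String workspace =
                new File(System.getProperty("build.workspace")).getAbsolutePath();

        System.setSecurityManager(new SecurityManager() {
            @Override
            public void checkWrite(String file) {
                // Simplified check: does not canonicalize symlinks or "..".
                if (!new File(file).getAbsolutePath().startsWith(workspace)) {
                    throw new SecurityException(
                            "Build script attempted to write outside workspace: " + file);
                }
            }

            @Override
            public void checkPermission(Permission perm) {
                // Permit everything else; a real sandbox would be stricter.
            }
        });

        org.apache.tools.ant.Main.main(args); // hand control to Ant (ant.jar on classpath)
    }
}

Note that SecurityManager has since been deprecated in recent JDKs, so on a modern stack the same isolation is usually achieved with containers or dedicated build agents instead.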
Am I thinking about this problem correctly? What have other people done?
Update: This guy considered the chroot jail idea:
https://www.thebedells.org/blog/2008/02/29/l33t-iphone-c0d1ng-ski1lz
Update 2: Trust is an interesting word. Do we think that any developers might attempt anything malicious? Nope. However, I'd bet that, with 30 projects building over the course of a year with developer-updated build scripts, there will be several instances of (1) accidental clobbering of filesystem areas outside of the project workspace, and (2) build corruptions that take a lot of time to figure out. Do we trust all our developers to not mess up? Nope. I don't trust myself to that level, that's for sure.
With respect to malicious code insertion, the real goal is to be able to eliminate the possibility from consideration if someone thinks that such a thing might have happened.
Also, with controls in place, developers can modify their own build scripts and test them without fear of catastrophe. This will lead to more build "innovation" and higher levels of quality enforced by the build process (unit test execution, etc.)
This may not be something you can change, but if you can't trust the developers, then you have a larger problem than what they can or cannot do to your build machine.
You could go about this a different way: if you can't trust what is going to be run, you may need a dedicated person (or persons) to act as build master, verifying not only the changes to your SCM but also executing the builds.
Then you have a clear chain of responsibility: builds are not modified after the build and only come from that build system.
Another option is to firewall off outbound requests from the build machine, allowing only certain resources like your SCM server and your other operational network resources (e-mail, OS updates, etc.).
This would prevent Ant scripts on the build machine from fetching outside resources that are not in source control.
When using Hudson you can set up a master/slave configuration and then not allow builds to be performed on the master. If you configure the slaves as virtual machines that can be easily snapshotted and restored, then you don't have to worry about a person messing up the build environment. If you apply a firewall to these slaves, it should solve your isolation needs.
I suggest you have one Hudson master instance, which is an entry point for everyone to see/configure/build the projects. Then you can set up multiple Hudson slaves, which might very well be virtual machines or (not 100% sure if this is possible) simply unprivileged users on the same machine.
Once you have this set up, you can tie builds to specific nodes, which are not allowed - either by virtual machine boundaries or by Linux filesystem permissions - to modify other workspaces.
How many projects will Hudson be building? Perhaps one Hudson instance would be too big, given the security concerns you are expressing. Have you considered distributing the Hudson instances, one per team? This avoids the permission issue entirely.
