Azure IoT Edge - Module deployment/update

If I have an Edge device with several modules, can I update one module without affecting the others, even though all modules are deployed through the same deployment manifest?

Yes. If you don't change a module's image URI, environment settings, or createOptions, it keeps running. So if you change these settings for only one of your modules, only that module is restarted; the rest remain active.
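For illustration, a minimal excerpt of the modules section of a deployment manifest (registry and module names are placeholders): bumping only moduleA's image tag restarts moduleA alone, while moduleB keeps running.

{
  "modulesContent": {
    "$edgeAgent": {
      "properties.desired": {
        "modules": {
          "moduleA": {
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "myregistry.azurecr.io/modulea:1.1",
              "createOptions": "{}"
            }
          },
          "moduleB": {
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "myregistry.azurecr.io/moduleb:1.0",
              "createOptions": "{}"
            }
          }
        }
      }
    }
  }
}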

Related

Spring Cloud Data Flow - Task Properties

I'm using SCDF, and I was wondering whether there is any way to configure default properties for an application.
I have a task application registered in SCDF, and this application takes some JDBC properties to access a business database:
app.foo.export.datasource.url=jdbc:db2://blablabla
app.foo.export.datasource.username=testuser
app.foo.export.datasource.password=**************
app.foo.export.datasource.driverClassName=com.ibm.db2.jcc.DB2Driver
Do I really need to put these properties in a properties file like this? (It feels a bit odd to define them at launch time.)
task launch fooTask --propertiesFile aaa.properties
Also, we cannot use the REST API, because the credentials would appear in the URL.
Is there another way or place to define default business properties for an application? These properties will only be used by this task.
The goal is to have one place where the ops team can configure the URL and credentials without touching the launch command.
Thank you.
Yeah, SCDF feels a bit weird in the configuration area.
As you wrote, you can register an application and create tasks, but all the configuration is passed at the first launch of the task. Put the other way round: you can't fully install and configure a task without running it.
As soon as a task has run once, you can relaunch it without any configuration and it uses the configuration from before. The whole config is saved in the SCDF database.
However, if you try to overwrite an existing configuration property with a new value, SCDF seems to ignore the new value and continues to use the old one. We have no idea whether this is by design, a bug, or something we are doing wrong.
Because we run SCDF tasks on Kubernetes and are used to configuring all infrastructure in YAML files, the best option we found was to write our own operator for SCDF.
This operator works against the REST interface of SCDF and also compensates for the configuration quirks mentioned above.
For example, the overwrite issue is solved by first deleting the configuration and then recreating it with the new values.
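A rough sketch of that workaround against the SCDF REST API (the /tasks/definitions endpoints exist in SCDF; whether a whole definition is the right granularity to delete depends on your setup, so treat the exact calls as an assumption):

# delete the task definition, and with it the stored launch configuration
curl -X DELETE http://scdf.example.com/tasks/definitions/fooTask
# recreate the definition, then relaunch with the new properties
curl -X POST http://scdf.example.com/tasks/definitions \
  --data-urlencode "name=fooTask" \
  --data-urlencode "definition=fooApp"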
With this operator we have reached what you are looking for: all our SCDF configuration lives in a Git repository, and all changes are made through merge requests. Thanks to CI/CD, the new configuration is used on the next launch.
However, a Kubernetes operator should be part of the product. Without it, SCDF on Kubernetes feels quite "alien".

OpenShift zero-downtime deployment: React + REST API

We have a web interface built with React (served by nginx) and a REST API (with JSON schema validation). They live in different repositories. Our cluster is a private OpenShift (3.11).
We would like to achieve a zero downtime deployment.
Let's assume that:
we have 10 pods for the WEB and 20 pods for the REST API.
we want to upgrade WEB and API from 1.0.0 to 2.0.0
the new version of WEB supports only the new version of the API
each repo (WEB and API) has its own Helm chart (if needed and recommended, we could create an additional repository containing a single Helm chart that deploys both WEB and API)
Which deployment strategy should we use (blue/green, canary, A/B)?
How can we configure the new WEB pods so that they hit only the new API service:
WEB 1.0.0 --> API 1.0.0
WEB 2.0.0 --> API 2.0.0
How can we perform the upgrade with zero downtime?
The crucial point is that, during the upgrade, the new version of the WEB should hit only the new version of the API, while the already-deployed pods (1.0.0) continue to hit the old version of the API.
I have done the same thing, and within Kubernetes you can achieve this with the following approach.
I do my deployments via Helm, and all the K8s objects (Pods, Services, Ingress) are named uniquely per release. That way I can reach a specific front-end release by adding a context path after my domain, such as https://app.com/1.0 or https://app.com/2.0.
The version I want to expose to the internet is controlled by a separate Ingress object (call it a "super-ingress"), which is independent of the releases and decides which version is live. This way you can deploy any number of releases to production without conflict, and through the super-ingress you choose which Service is exposed to the public.
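A minimal sketch of such a "super-ingress" (names and host are placeholders; shown with the current networking.k8s.io/v1 API, while clusters of the OpenShift 3.11 era used extensions/v1beta1 or, more idiomatically, a Route):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: super-ingress
spec:
  rules:
  - host: app.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-2-0-0   # flip this to web-1-0-0 to roll back
            port:
              number: 80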
Given the constraints you describe, your only real choice is a blue/green approach.
You have one set of components that work together, call it A, and another set that works together, B. Mixing A and B is not possible, which rules out canary and A/B testing.
You need to deploy B (green), and when everything is correct, switch the domain from A to B.
In Kubernetes terms, you will have two separate Deployments and Services, as if they were standalone applications. Once you are confident that v2 works properly, you change the DNS record that points to the LoadBalancer of v1's Service so that it points to v2's Service instead.
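Sketched in manifests (all names are hypothetical), the two stacks stay fully separate; each WEB Deployment is given its matching API Service name, e.g. via an environment variable, so v1 and v2 never cross:

apiVersion: v1
kind: Service
metadata:
  name: api-v1
spec:
  selector:
    app: api
    version: "1.0.0"
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-v2
spec:
  selector:
    app: api
    version: "2.0.0"
  ports:
  - port: 80
    targetPort: 8080
---
# in the WEB 2.0.0 Deployment's pod template:
#   env:
#   - name: API_URL
#     value: "http://api-v2"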

How to set up Liferay for team development and deployment?

I am looking into how to set up a Liferay project with version control and automated deployment. I have a working local development environment in Eclipse, but as far as I understand it, a Liferay setup consists partly of the Liferay portal instance running on Tomcat and partly of my custom module projects for customization. I basically want all of that in one Git repository which can then be
1: cloned by any developer to set up their local dev environment
2: built and deployed by e.g. Jenkins into e.g. AWS
I have looked at the Liferay documentation on creating a Docker container for the portal, but I don't fully understand how things like portal content would be handled.
I would be very grateful if someone could point me in the right direction on how an environment like this would be set up.
Code and content are different beasts. Set up a local Liferay instance for every single developer. Share and version the code through whatever version-control system you use (you mention Git).
This way, every developer can work on their own project, set breakpoints, and create content that doesn't interfere with other developers.
Set up a separate integration-test environment that gets its code exclusively through your CI server and is never touched manually.
Your production (or preproduction) database will likely have completely different content: Where a developer is quick to create a few "Lorem Ipsum" posts and pages, you don't want them to escape into production. Thus there's no movement of content from development to production. Only code moves that way.
In case you want your developers to work on a production-like environment, you can restore the production content (database) to development machines. Note that this is risky though: The database also contains user accounts, and you might trigger update notification mails from your development machines - something that you want to avoid at all costs. Plus, this way you give developers access to login data (even though it's hashed) which can be abused. And it might even be explicitly forbidden by industry regulations to use production data in development environments.
In general: Every system has its own database (at least their own schema), document store and indexing server. Every developer has their own portal JVM running. The other environments (integration test, load test, authoring, production) are also separate environments. And no, you don't need all of them all the time.
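If you go the Docker route, a per-developer instance can be as small as a docker-compose file like the following sketch (the image tag and volume path are assumptions based on the official liferay/portal image; it uses the embedded database, which is fine for development):

services:
  liferay:
    image: liferay/portal:7.4.3.92-ga92   # example tag, pick your target version
    ports:
      - "8080:8080"
    volumes:
      - liferay-data:/opt/liferay/data    # keeps each developer's content local
volumes:
  liferay-data: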
I can't attribute this quote (Milen can - see his comment), but it holds here:
Everybody has a testing environment. Some are lucky to run a completely different production environment.
Be the lucky one. If everyone has their own fully separated environment, nobody steps on anyone else's toes. And you'll need the integration tests (with the CI output) anyway.

How do I configure two sets of hosts (3 for QA and 3 for Prod) for deploying a distributed system using Spinnaker?

I am using Spinnaker to deploy a 3-tier system to QA and then to Production. Configuration files in each of these systems point to the others. If I bake the QA configuration into the AMI, how do I change it when promoting to Prod? Is it 1) by having two different sets of AMIs, one for QA and one for Prod, or 2) by baking AMIs with no configuration and then configuring them (somehow) after deployment?
What is recommended?
You can define custom AWS user data per cluster at deploy time (under the advanced settings of the cluster configuration). You can then retrieve this user data in your application, which allows you to change these types of configuration.
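For example, inside the instance the user data can be read from the EC2 instance metadata service (a standard AWS endpoint, not Spinnaker-specific); a minimal sketch:

#!/bin/sh
# read the user data attached to this instance at deploy time
USER_DATA=$(curl -s http://169.254.169.254/latest/user-data)
echo "$USER_DATA"   # e.g. parse key=value pairs into your app's config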
At Netflix, we have a series of init scripts that are baked into the base image and provide a mechanism for extending custom startup (init.d) scripts via Nebula/Gradle. This usually sets values like NETFLIX_ENVIRONMENT that are well known and programmed against.
We also use a feature-flipping mechanism via https://github.com/Netflix/archaius. This allows us to add properties that are external to the clusters but can be targeted at them.
When it comes to secured credentials, the approach is outlined in this presentation, but essentially the images reach out to an external service that issues these types of credentials: https://speakerdeck.com/bdpayne/key-management-in-aws-how-netflix-secures-sensitive-data-without-its-own-data-center
I am struggling with similar problems myself in our company.
My solution was to create AMIs for specific purposes using a Packer script. This allows me to:
1. Configure the server as much as I can and then store those configurations in an AMI.
2. Easily change these configurations if the need arises.
Then I launch the AMI with an Ansible script and make all remaining configuration changes on the specific instance.
In my case I chose to create different images for staging and production, mostly because they differ greatly. If they were more alike, I might have used a single AMI for both.
The advantage Ansible gives you here is that you can factor out your configuration and apply what you wrote once to both the production and staging servers.
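As an illustration of that bake-per-environment flow, a minimal Packer template in the classic JSON format (region, source AMI, and the provisioning step are placeholders):

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "t3.micro",
    "ssh_username": "ec2-user",
    "ami_name": "myapp-staging-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["echo ENVIRONMENT=staging | sudo tee -a /etc/environment"]
  }]
}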

Grails: Two Sites, One Codebase

I have a sizable Grails application that has a small handful of controllers and views, and a very large number of Services.
I now need to build a "reskinned" version of the same site which has its own set of views and controllers (there are some smallish differences in how the controllers work), but the exact same set of Services. One option is to move all of the services into some kind of common place via custom Grails plugins, but I'm wondering if there's another way.
What I'd like to do is have two packages in the controllers folder, com.company.sitea and com.company.siteb, with an environment variable that effectively chooses which one of those is used. Similarly, two different sets of views, each one selected based on this environment variable, but with a shared taglib.
I was unable to find anything obvious that does this. Is there a plugin or a standard way of doing this (or something similar)? The idea is that I'd have one codebase and build one WAR, but the WAR would be deployed in two different places, each of which would specify a value for a special environment variable.
We have been successfully using a pattern where we make the first application a plugin. It is run as an application for the first use case and included as a plugin in the second.
If you do this (a Grails application is an application and a plugin at the same time), you have to exclude the plugin from starting up when it runs as an application. Otherwise the application will start itself twice: as an application and as a plugin. The grails.plugin.excludes configuration setting (explained here) prevents this from happening.
I have documented this special use case in this JIRA issue:
http://jira.grails.org/browse/GRAILS-6751
"Allow a Grails application to be used as a Grails plugin besides using it as an application"
This has turned out to be a killer feature for us in many cases. We have been able to reuse applications as plugins in other use cases and combine several separate applications to one with this feature. Of course there are some restrictions (like name conflicts) but that hasn't been a problem for us.
The most common use case is to reuse a full application and just override the views in another one. Since an existing application can be an application and plugin at the same time, there is no extra work in extracting "common parts" to a separate plugin.
To make an existing Grails application a plugin you just have to add a MyAppNameGrailsPlugin.groovy file to the root directory and add this configuration value to grails-app/conf/Config.groovy:
grails.plugin.excludes = ['myAppName'] (application name in camel case starting with lower case letter)
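For illustration, a minimal (hypothetical) descriptor; in Grails 2.x the version property is the only strictly required field:

class MyAppNameGrailsPlugin {
    // minimal plugin descriptor so the app can also be used as a plugin
    def version = "0.1"
    def grailsVersion = "2.0 > *"
    def title = "My App Name"
    def description = "Exposes the myAppName application's services and views as a plugin."
}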
Mailing list discussion:
http://grails.1312388.n4.nabble.com/Dynamic-applications-extending-an-existing-application-with-Grails-tp4634094p4634202.html
Burt's detailed blog post:
http://burtbeckwith.com/blog/?p=1973
David Dawson's presentation:
http://skillsmatter.com/podcast/home/large-scale-grails-using-plugins-how-to-modularise-an-application/jd-8012
One important factor is whether both versions should operate on the same data. Will they be different front ends over the same database, or completely separate, for example a per-customer deployment?
If the former, I would go the web-services way: one app would contain the business logic, with controllers working as web services, and the other app would just have different controllers "talking" to the first one.
You could also combine both versions in one project and limit access to the different controllers from the different domains with Apache unmounts.
If the latter, I would also try to extract the common part into a plugin.
