Publishing two IoT Edge modules to one IoT Edge device on Staging and Production environments via one Azure pipeline and one Release - azure-iot-edge

I need to publish two IoT Edge modules to one IoT Edge device on Staging and Production environments via one Azure pipeline and one Release.
I parametrized the Azure container registry's address, username, and password via variables (build variables and JSON variables are used), and the solution works for one environment. But how can I do the same for both environments via one Azure pipeline and one Release?
deployment.template.json
module.json
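
For reference (this is not the asker's actual file), the registry settings in a deployment.template.json are typically parametrized along these lines. A minimal, abridged sketch: ACR_ADDRESS, ACR_USERNAME, ACR_PASSWORD, and BUILD_TAG are hypothetical variable names, and the exact ${...} token syntax depends on the substitution step the pipeline uses:

{
  "$schema-template": "2.0.0",
  "modulesContent": {
    "$edgeAgent": {
      "properties.desired": {
        "runtime": {
          "type": "docker",
          "settings": {
            "registryCredentials": {
              "myAcr": {
                "address": "${ACR_ADDRESS}",
                "username": "${ACR_USERNAME}",
                "password": "${ACR_PASSWORD}"
              }
            }
          }
        },
        "modules": {
          "moduleA": {
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "${ACR_ADDRESS}/modulea:${BUILD_TAG}"
            }
          }
        }
      }
    }
  }
}

In module.json, the image's "repository" field can point at the same variable (e.g. "${ACR_ADDRESS}/modulea"), so one solution builds and pushes against whichever registry the active environment's variables select.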

Related

Azure DevOps CI with Web Apps for Containers

I'm struggling to set up a CI process for a web application in Azure. I'm used to deploying built code directly into Web Apps in Azure, but decided to use Docker this time.
In the build pipeline, I build the Docker images and push them to an Azure Container Registry, tagged with the latest build number. In the release pipeline (which has DEV, TEST and PROD stages), I need to deploy those images to the Web Apps of each environment.

There are two relevant tasks available in Azure releases: "Azure App Service deploy" and "Azure Web App for Containers". Neither of these allows the image source for the Web App to be set to Azure Container Registry. Instead they take custom registry/repository names and set the image source in the Web App to Private Registry, which then requires a login and password.

I'm also deploying all Azure resources using ARM templates, so I don't like the idea of configuring credentials when the two resources (the Registry and the Web App) are integrated already. Ideally, I would be able to set the Web App to use the repository and tag in Azure Container Registry that I specify in the release. I even tried to manually configure the Web Apps first with specific repositories and tags, and then tried to change the tags used by the Web Apps with the release (with the tasks I mentioned), but it didn't work: the tags stay the same.
Another option I considered was to configure all Web Apps with specific, permanent repositories and tags (e.g. "dev-latest") from the start (which doesn't fit well with ARM deployments, since the containers need to exist in the Registry before the Web Apps can be configured, so my infrastructure automation is incomplete), enable "Continuous Deployment" in the Web Apps, and then tag the latest pushed repositories accordingly in the release so they would be picked up by the Web Apps. However, I could not find a reasonable way to add tags to existing repositories in the Registry.
What is Azure best practice for CI with containerised web apps? How do people actually build their containers and then deploy them to each environment?
Just set up a CI pipeline that builds an image and pushes it to a container registry.
You could then use either the Azure App Service deploy or the Azure Web App for Containers task to handle the deployment.
The Azure Web App for Containers task, like other built-in Azure tasks, requires an Azure service connection as an input. The Azure service connection stores the credentials needed to connect from Azure Pipelines or Azure DevOps Server to Azure.
I'm also deploying all Azure resources using ARM templates so I don't like the idea of configuring credentials when the 2 resources (the Registry and the Web App) are integrated already.
You should also be able to deploy an Azure Web App for Containers with ARM templates and Azure DevOps.
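As an illustration, the container image can be wired up directly in the ARM template so that the release never has to configure registry credentials by hand. A minimal sketch (the parameter names, the image name mywebapp, and the apiVersion are assumptions, not taken from the question):

{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2018-11-01",
  "name": "[parameters('siteName')]",
  "location": "[resourceGroup().location]",
  "kind": "app,linux,container",
  "properties": {
    "serverFarmId": "[parameters('appServicePlanId')]",
    "siteConfig": {
      "linuxFxVersion": "[concat('DOCKER|', parameters('acrName'), '.azurecr.io/mywebapp:', parameters('imageTag'))]",
      "appSettings": [
        {
          "name": "DOCKER_REGISTRY_SERVER_URL",
          "value": "[concat('https://', parameters('acrName'), '.azurecr.io')]"
        }
      ]
    }
  }
}

Passing imageTag as a template parameter from the release means each stage (DEV, TEST, PROD) can pin its own image version without touching credentials in the portal.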
How do people actually build their containers and then deploy them to each environment?
Take a look at the blogs and official documentation below, which may be helpful:
Deploy an Azure Web App Container (official)
Azure DevOps: Create a Web App for Containers CI/Release pipeline for an ASP.NET Core app
Build & release a Container Image from Azure DevOps to Azure Web App for Containers

Automating EC2 Instances register/deregister from ELB during Jenkins job build

I am using Jenkins to build the binaries to be deployed on my production server. The source code is being managed in SVN and in Jenkins, I am using the parameterized plugin to allow the team members to select the tags they want to deploy.
Currently, the production setup has multiple instances running behind an ELB. To deploy a build without downtime, I need to take the instances out (deregister them) one by one and deploy the build on each server.
I am looking for a Jenkins plugin (if available) which could help me automate that task: take one instance out of the ELB, deploy the latest build, register that instance back with the ELB, and repeat the same steps for all the instances.
NOTE: Instances can be dynamic in count as autoscaling can increase or decrease the instances behind the ELB.
If your build produces a Docker container, then you can use EKS or ECS to automate the deployment. These services will take care of deploying your new version of the service side by side with the old one, without any downtime. Additionally, you can set scaling policies to increase or decrease the number of service instances based on load.
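
For example, with ECS the register/deregister cycle is handled by the service scheduler. A service definition along these lines (a hypothetical sketch usable with aws ecs create-service --cli-input-json; the names, ARN, and counts are placeholders) tells ECS to start new tasks behind the load balancer before draining the old ones:

{
  "serviceName": "my-service",
  "cluster": "production",
  "taskDefinition": "my-task:42",
  "desiredCount": 4,
  "deploymentConfiguration": {
    "maximumPercent": 200,
    "minimumHealthyPercent": 100
  },
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef",
      "containerName": "web",
      "containerPort": 80
    }
  ]
}

With minimumHealthyPercent at 100 and maximumPercent at 200, new tasks are brought up and registered with the target group before any old task is stopped, which is the no-downtime rotation the question describes, minus the hand-rolled Jenkins logic.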

Docker/Kubernetes with on premise servers

I have a .NET core web API and Angular 7 app that I need to deploy to multiple client servers, potentially running a plethora of different OS setups.
Dockerising the whole app seems like the best way to handle this, so I can ensure that it all works wherever it goes.
My question is on my understanding of Kubernetes and the distribution of the application. We use Azure DevOps for build pipelines, so if I understand correctly it would work as follows:
1) Azure DevOps builds and deploys the image as a Docker container.
2) Kubernetes could realise there is a new version of the Docker image and push it out to all of the different client servers?
3) Client-specific app settings could be handled by Kubernetes secrets.
Is that a reasonable setup? Have I missed anything? And are there any recommendations on setup/guides I can follow to get started?
Thanks in advance, James
Azure DevOps will perform the CI part of your pipeline. Once that completes, Azure DevOps will push the images to ACR. The CD part should be done either directly from Azure DevOps (you may have to install a private agent on your on-prem servers and configure firewalls, etc.) or with Kubernetes-native CD tools such as Spinnaker or Jenkins X. Secrets should be kept in Kubernetes secrets.
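
For the client-specific settings, a Kubernetes Secret can carry per-client configuration that the pods read as environment variables. A minimal sketch (the secret name, key, and connection string are placeholder values; the double underscore is how ASP.NET Core maps an environment variable to the nested ConnectionStrings:Default configuration key):

{
  "apiVersion": "v1",
  "kind": "Secret",
  "metadata": {
    "name": "client-appsettings"
  },
  "type": "Opaque",
  "stringData": {
    "ConnectionStrings__Default": "Server=db.client-a.example.com;Database=appdb;User Id=app;Password=changeme;"
  }
}

Applied with kubectl apply -f, this can be exposed to the API container via envFrom with a secretRef in the Deployment spec, so the same image runs everywhere and only the Secret differs per client.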

Sharing Agent Queue For Prod and Stag Environment in TFS Build/Release Definition

I am trying to setup build and release definition in TFS 2015. I have set up multiple agent queues for different environments Staging, Production, Load, UAT. I have different physical agents for each of this environment and each agent has permission to connect to respective environment to deploy code.
My question is: how do I share agents across these environments? Is it possible to have one agent which has permission to all these environments and can deploy code to the IIS websites? My website's name is also the same in each environment, e.g. abc.com (UAT) and abc.com (PROD).
TFS version is 2015.
Fundamentally there's nothing stopping you, but you will need to look at a couple of things.
First, does the agent/VM have access to all the environments? Environments are often in different AD domains, so you may have an agent that is in (or can see) your UAT domain but is unable to access the PROD domain. If that's fine, then secondly, you will need to make sure that the account the agent runs under also has permissions: the machine may be able to see both domains, but the agent might be running under an account like tfsagent@uat.domain, while your other agent runs under a tfsagent@prod.domain account.
If both the agent/VM and the agent user can see both/all domains, then you need to consider security (what's stopping a dev from changing a name, or the deploy process, and pushing something live with no oversight?).

When do we need to have slaves for Jenkins and when we do not?

I am a beginner user of Jenkins. I am trying to put a development process onto a DevOps pipeline that includes Jenkins, GitHub, SonarQube, and IBM UCD.
It is not a very complicated deployment process, and it uses Windows machines.
There are three environments: QA, DEV, and PROD.
I know that I need to install one IBM UCD agent for each of those three, but do I need to have three slaves in Jenkins as well, or could just one Jenkins master handle the deployment for all three environments? Which way is better?
Companies usually use the "Master+Agent" scheme for complex deployment processes, but in your case there is no need to create an advanced Jenkins system with a master and agents if you can build on one host and you don't have any additional projects or restrictions.
From official documentation:
It is pretty common when starting with Jenkins to have a single server which runs the master and all builds, however Jenkins architecture is fundamentally "Master+Agent". The master is designed to do co-ordination and provide the GUI and API endpoints, and the Agents are designed to perform the work. The reason being that workloads are often best "farmed out" to distributed servers. This may be for scale, or to provide different tools, or build on different target platforms. Another common reason for remote agents is to enact deployments into secured environments (without the master having direct access).
For additional information you can read the following articles: this and this.
