I am about to develop a program which will be installed and run as a Windows Service on the back-end servers (with internet access) of several clients. I do not have physical access to the servers. What I am working on is a scheme for reliably rolling out updates to the program.
I have spent quite a lot of time looking for best practices for Windows Service auto-updates, but have found very little useful information. Most techniques, such as ClickOnce, seem to cater primarily to Windows Forms-based applications.
My current plan is:
When updates are available, have the running service download, install, and start an update service.
The update service will download the new version of the main service and install it side-by-side of the current, still active version.
The update service next stops the old version and then starts the new version of the service.
The new version disables the update service during startup.
Some important concerns for me are:
Being able to handle updates both to the main service and the update service
Being able to handle interruptions such as power failure or a connection failure at any point during the update process
What I am wondering is whether this is a common way to solve the problem, and whether I am missing anything crucial. Is it too complicated, or perhaps far too simplistic? How would you have done it (or, even better, how have you done this successfully)?
Thanks!
I was faced with the same problem last year and I wrestled with a lot of the same questions. A few bits of advice:
Since either service may be updated, both services will function as updaters of the other: Service A will update Service B and vice versa. For this reason, I suggest simply running both services at all times. Unless you are worried about really loading down your server with does-update-exist calls, the enable/disable service management isn't worth the overhead.
Identically named services can't be installed on a single machine. In other words, you can't install new and old versions of the service side by side if they have the same name. Unless you want to complicate your update process, I suggest you uninstall the old version and then install the new version. For example, Service A would download the Service B installer, uninstall Service B, install the new version of Service B, and then start it. Service B would do the same for Service A.
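To make the uninstall-then-reinstall flow concrete, here is a minimal Python sketch of Service A replacing Service B using the standard Windows tools sc.exe and msiexec. The service name, download URL, MSI path, and product code are placeholders rather than anything from the question, and a real updater would also verify the download and handle rollback if the install fails.

# Hypothetical sketch: Service A replaces Service B.
# "ServiceB", UPDATE_URL, MSI_PATH and the product code GUID are placeholders.
import subprocess
import urllib.request

UPDATE_URL = "https://example.com/updates/service-b-latest.msi"
MSI_PATH = r"C:\Temp\service-b-latest.msi"

def update_service_b():
    # 1. Download the new installer for Service B.
    urllib.request.urlretrieve(UPDATE_URL, MSI_PATH)
    # 2. Stop and uninstall the old version (same service name, so it must go first).
    subprocess.run(["sc", "stop", "ServiceB"], check=False)
    subprocess.run(["msiexec", "/x", "{OLD-PRODUCT-CODE-GUID}", "/qn"], check=False)
    # 3. Install the new version silently, then start it.
    subprocess.run(["msiexec", "/i", MSI_PATH, "/qn"], check=True)
    subprocess.run(["sc", "start", "ServiceB"], check=True)

if __name__ == "__main__":
    update_service_b()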
Since each service is managing the other, they should not only check for available updates but also verify each other's health. For example, Service A would check whether Service B exists and whether it is running. If the health check fails, Service A would work through a list of steps to resolve the issue and get the service running again. Executing the health check and recovery operations will cover you no matter what issue arises with the update, the initial install, or general operations.
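A health check along those lines can stay very small. Below is a hedged Python sketch of how Service A might verify Service B via sc query and attempt a restart; "ServiceB" is a placeholder name and the recovery step is just a stub standing in for the reinstall flow sketched above.

# Hypothetical health check run by Service A against Service B.
import subprocess

def service_state(name):
    # Returns the raw 'sc query' output, or None if the service is not installed.
    result = subprocess.run(["sc", "query", name], capture_output=True, text=True)
    return result.stdout if result.returncode == 0 else None

def recover(name):
    # Placeholder: in practice this could re-run the install flow above or alert an operator.
    print(f"{name} could not be started; reinstall it or raise an alert")

def ensure_service_running(name):
    state = service_state(name)
    if state is None:
        recover(name)                                  # not installed at all
    elif "RUNNING" not in state:
        subprocess.run(["sc", "start", name], check=False)
        if "RUNNING" not in (service_state(name) or ""):
            recover(name)

if __name__ == "__main__":
    ensure_service_running("ServiceB")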
Do ample logging on both the client and the server. You'll want to track what actions were taken and when. For example, Service A might log when it is checking for updates and when it is executing the health check and associated actions. On the server (assuming you are calling into a web service looking for updates), track the calls made by each service. If your services aren't getting updated, at least you'll have a trail of breadcrumbs (or lack of breadcrumbs) pointing you to the problem.
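On the client side, even a few lines of standard-library logging go a long way. The file path and message wording below are illustrative only, not taken from the answer.

# Hypothetical client-side logging for the updater.
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(r"C:\ProgramData\MyService\update.log",
                              maxBytes=1_000_000, backupCount=5)
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s",
                    handlers=[handler])
log = logging.getLogger("updater")

log.info("checking for updates")
log.info("health check: ServiceB state=%s", "RUNNING")
log.info("update applied: ServiceB %s -> %s", "1.2.0", "1.3.0")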
There are literally bunches of potential gotchas with a solution of this sort: services not running at startup, UAC getting in the way, not being able to install and uninstall the services with the same user, ensuring the user installing the service has ample permissions, connectivity loss, getting the .NET Framework installed on the client machine, handling a reboot after install if necessary, etc.
Best of luck. It's a fun problem to solve, but it doesn't go without its frustrations -- especially since, as you said, there isn't a lot of documented information available.
This question already has answers here:
Docker Compose wait for container X before starting Y
My Dockerfile starts the application using the following command
CMD ["/home/app/start-app.sh"]
start-app.sh contains the following; it waits until the RabbitMQ server is available. Is there a better way of achieving this for applications running under docker-compose or k8s?
while ! nc -z "$RABBITMQ_HOSTNAME" "$SPRING_RABBITMQ_PORT"; do sleep 10; done
I guess not. Even though Docker Compose provides depends_on, it just starts the containers in dependency order; it does not make sure that a specific service is actually available. The Docker documentation explains it in a way that I can't improve on:
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
You already do what is needed if you have to wait for another service to become ready. But as they say, it is better to develop applications that can handle downtime of other services, because downtime can happen at any time. That's why Netflix developed Chaos Monkey. One might argue that handling all these possible states makes applications much harder to develop. That's true, but it's just the complexity of distribution. You can either ignore it ("this will never happen" -- and I guarantee it will) or accept that it can happen and do the right thing. Maybe this decision depends on the risk and the damage it would do to your company when it happens.
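As an illustration, here is a retry-with-backoff helper the application itself could use to (re)establish the connection, instead of relying on an external wait script. It assumes the broker host and port come from the same environment variables as the shell loop in the question; everything else is a sketch, not a prescribed implementation.

# Retry-with-backoff sketch; RABBITMQ_HOSTNAME / SPRING_RABBITMQ_PORT are taken from the question.
import os
import socket
import time

def wait_for(host, port, attempts=10, base_delay=1.0):
    # Block until a TCP connection to host:port succeeds, or give up after `attempts` tries.
    for attempt in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            time.sleep(min(base_delay * 2 ** attempt, 30))   # capped exponential backoff
    return False

if __name__ == "__main__":
    host = os.environ["RABBITMQ_HOSTNAME"]
    port = int(os.environ["SPRING_RABBITMQ_PORT"])
    if not wait_for(host, port):
        raise SystemExit(f"{host}:{port} never became reachable")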
Services or microservices give us a lot of decoupling and flexibility, but they come at a cost that we should keep in mind.
Our backend is containerised with docker for use with minikube, I was wondering if as an iOS developer I can take advantage of this by running the backend locally on my laptop rather than having to communicate with a staging cloud based environment which can often be flaky.
Am I misunderstanding how this technology works, or would this be a viable and useful case for docker in iOS development, speeding up request and response times and allowing more control over the state of the backend I am building against?
Thanks for any clarity on this idea
What you're describing is possible and is something I do regularly in my day job, so as to possibility: yes, you can do this.
The question of whether this brings any benefit is broad and depends on each individual's needs. If you are finding that your cloud instance is extremely slow at the moment and you don't have the capacity to improve its performance, a locally run Docker instance could very well help with this.
One thing to keep in mind, though, is that any changes you make to a local instance/server in order to make the app work as expected will need to be reflected in your production instance before your app goes live to the public; otherwise you will see undesired behaviour because the app is relying on non-existent server configs.
I understand Docker container features like app isolation, server consolidation, and so on, but I am wondering what the real-world use cases for containers are. How are you using containers in a production environment? Are they being used to host web services and web sites? Can you please give me some examples?
Thanks
First, decoupling and distributing applications into microservices.
So I'll give you an example I work with. Consider the old method of deploying an application, let's say a Rails application that is a monolith, i.e. with the db, web app, a background task scheduler like Sidekiq (in the case of Python, RabbitMQ), and Redis to hold temporary key/value data. All these services running on a single server means the application has a single point of failure at any given moment: if something fails, your whole application is offline while you rush off to trace what broke. If instead you split your application into small pieces known as microservices, each part of your application is packed into a container with the libraries it needs and then deployed independently and dynamically to form part of your overall application. Decoupling your application into these manageable components reduces the probability of it ever failing entirely. Furthermore, decoupling allows container-based applications to be deployed easily and consistently, regardless of whether the target environment is a private data center, the public cloud, or even a developer's personal laptop.
Dev and Ops separation of concerns
Take the part where most people build a release during deployment, which may sometimes fail. With a container you get to build your image before release time, and you can run an environment similar to the one in production with no guesswork, which makes it easy to roll out more frequently with less concern about failure. It also increases the consistency with which you build and release your application.
Containerization provides a clean separation of concerns, as developers focus on their application logic and dependencies, while IT operations teams can focus on deployment and management without bothering with application details such as specific software versions and configurations specific to the app.
Just have a read of these two links and it should clear up some things on your end: Containers 101 (Google) and Kubernetes.
I made a piece of software using Python, then converted it to an exe application using cx_Freeze. Now I'm trying to make an MSI installer for my application using Advanced Installer and to use Advanced Installer's trial time limitation on my application. How can I do this, and is there any alternative that can do it?
First, make sure that you have an Enterprise or higher license. Then, follow the instructions in this image (you may want to change the last three steps and maybe change the last step) by clicking on every specified button and correctly setting every specified field:
The Display Name and other fields will be automatically filled out. Feel free to customize further, but this should just work.
I have found it easier to provide this functionality in the application itself (especially if you have a company web site that can provide web service calls). One of the reasons is that it seems fairer to start the clock at first use of the app, not at install time. If you are worried about users hacking the time trial, it's also more secure to make a web service call to your company's web site. So if you want to build your own, this is the general idea:
The best solution is when the install medium has its own unique CD key or license. The app passes that to your company's web service and says "first run of the app". That starts the clock, kept on your company's server, for this copy of the app on this particular customer's system. Later runs of the app call in to see if the clock has run out.
If there is no license or CD key, another way is to generate a signature from the software and hardware of the machine and pass that to the web service.
Cache the estimated expiry date somewhere as a fallback.
When the app starts, pass the license (or machine signature) information to the company web service to see if the trial has expired. If the web service is not available, use the cached expiry date instead. If that's not available either, someone has probably tried to hack it, so don't let the app run.
If the customer tries to install the product on another system, the license/cd key from the install medium has already been used so the app won't work when installed. That's why a unique key per install is useful. This prevents that install license from being installed anywhere else (typically until the customer pays and the company database says they are ok to use that license).
If the customer uninstalls and tries to reinstall on the same machine, that hardware signature has been seen before too, so if there is no unique license key the machine signature can be used to detect re-use. There isn't a way to stop it running on another system if the customer has more than one, and again that's why a unique license id per install medium works best.
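If you do roll your own, the whole flow fits in a short script. The endpoint URL, response format, and cache file below are assumptions for illustration, not a real API; the machine signature is just a hash of a few identifiers available from the Python standard library.

# Hypothetical trial check; LICENSE_URL and the JSON response shape are assumptions.
import hashlib
import json
import platform
import urllib.request
import uuid
from datetime import date

LICENSE_URL = "https://example.com/api/trial-status"
CACHE_FILE = "trial_cache.json"

def machine_signature():
    # Combine a few machine identifiers into one opaque signature.
    raw = f"{platform.node()}|{uuid.getnode()}|{platform.machine()}"
    return hashlib.sha256(raw.encode()).hexdigest()

def trial_is_valid(license_key=None):
    payload = json.dumps({"license": license_key,
                          "signature": machine_signature()}).encode()
    req = urllib.request.Request(LICENSE_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            info = json.loads(resp.read())       # assumed shape: {"expires": "2025-01-31"}
        with open(CACHE_FILE, "w") as f:         # cache the expiry date as a fallback
            json.dump(info, f)
    except OSError:
        try:
            with open(CACHE_FILE) as f:          # server unreachable: fall back to the cache
                info = json.load(f)
        except OSError:
            return False                         # no server and no cache: refuse to run
    return date.fromisoformat(info["expires"]) >= date.today()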
In the end the issue is how much you care and how much you want to stop the customer running the app. In many cases a severe nagging message at start up and other points while the app runs can be enough.
I work in an IT department that is divided into two groups. One group develops and manages applications; the other manages the company's infrastructure and servers. One of the problems we face is a breakdown in communication. I work for the application group, and one of the problems I have is not being notified when a server is taken down by infrastructure, or when a database is being refreshed.
Does anyone have suggestions on how to improve communication between the two groups, or ideas on how to keep a lightweight log across multiple systems (both Linux and Windows)? Ideally it would be nice if we could have our boxes just tweet their statuses or something.
Thanks for the help,
Ben
One thing you could do to communicate server status is to have your Infrastructure group set up a network monitoring system like Nagios. This will give everyone in your application group the ability to get a snapshot view of the status of every server in the system. Having this kind of status is invaluable when you are doing development.
Nagios gives you network monitoring, but also allows you to show scheduled down time for a particular server in the system.
Another thing your group could do to foster communication with the Infrastructure group is to have your build system report which servers it is currently using for building and testing your products.
Also, setting up regular meetings between stakeholders of both groups is probably a good idea. If you are all talking to each other, even for 15 minutes a week, you'll probably see incidents like the one you described go down quite a bit.
I think this is a bigger issue of change control.
You should have hardware and software change control and an approval process.
Ultimately, infrastructure serves you - the purpose for IT infrastructure is to run applications.
In my current large financial data company, servers are not TOUCHED without proper authorization from the client and application groups. It seems like a huge pain, but every single server is there for a reason: to meet a specific business goal and run a specific application. There is simply no excuse for the infrastructure group to be changing things or upsetting servers of their own volition.
Response to critical hardware failure might be an exception.
Needed software and OS updates are handled through scheduled maintenance windows and an approved change process.
I like the Nagios idea as well. If you want to setup something that's more of a communication tool, I would recommend a content management system like Drupal.
We use Drupal internally to communicate between teams. When one team takes a server down, they would add an event into Drupal. The rest of us would either get it as an email, an RSS item or just by refreshing the page.
Implement a change control process where changes are submitted, approved and scheduled for BOTH groups. This lets everyone know what is going on. This process can be as light or heavy-weight as you want.