MLOps - How to refresh an ML model - machine-learning

The image below shows the two pipelines that we have in my company to manage the life cycle of a model.
The first pipeline, "Application", relates to the creation of the application component that hosts the model and contains the inference logic.
The second, "Model", is a pipeline that produces the model in binary format.
Together, model and application are deployed to our orchestrator (a Kubernetes cluster).
I am in a situation where the application logic does not change but the models do.
I could find myself in the situation below.
I suppose there are two approaches to managing the runtime model refresh on the orchestrator (I hope someone can suggest other possibilities that I haven't thought of):
In the application logic: the code manages the refresh through a thread that picks up the new model.
Pros: A new container is not generated
Cons: Possibility of introducing a bug.
Through the pipeline: the pipeline is triggered by an event (in my case a merge on a git branch) and replaces the container by performing a rolling update. The new container loads the new model at startup.
Pros: Existing process
Cons: Each new version of the model requires a new build of the container, even if the application logic has not changed.
[Question]
Are there any best practices for these cases (perhaps through a system of tags on the images) that someone can suggest?
Thank you
Kipliko

The most seamless way is to do a rolling update via k8s/kubectl. This requires a new container; however, it is considered a best practice, as each container stays atomic and reproducible. Updating the model via threads would be difficult to debug.
Another option is a blue-green deployment using Istio, slowly shifting traffic from the old model to the new one, although this requires a bit more overhead.
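To make the rolling-update and tag ideas concrete, here is a rough sketch (the registry, deployment name, and version numbers are invented for illustration, not taken from the question): encode both the application version and the model version in the image tag, so a model refresh is just a tag change that Kubernetes rolls out for you.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-server            # hypothetical deployment name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate         # pods are replaced gradually, no downtime
  selector:
    matchLabels:
      app: model-server
  template:
    metadata:
      labels:
        app: model-server
    spec:
      containers:
      - name: model-server
        # tag encodes both the application version and the model version
        image: registry.example.com/model-server:app-1.4.0-model-2.7.0

When only the model changes, the image is rebuilt with the new binary under a new tag, and the refresh is a single command such as kubectl set image deployment/model-server model-server=registry.example.com/model-server:app-1.4.0-model-2.8.0, with kubectl rollout status deployment/model-server reporting when the update is complete.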

Related

Durable tasks sub orchestration with micro services

I'm attempting to use Azure durable tasks to orchestrate some microservices, but am running into a small gap in understanding how task hubs work, as well as how to coordinate the projects correctly.
I'm trying to create a main orchestrator that is in charge of kicking off sub orchestrations to do the actual work. Below is a diagram of what I'm trying to achieve.
The idea is that each .NET project will be able to scale independently of the others, so if .NET project 2 was under quite a bit of load I'd be able to scale that project only and not have to worry about the other two projects. The problem I'm running into is that, from what I understand, the task hub queue is shared by all the services, so there is no way to have each process focus on only its own work; each project can see everything in the queue, and one project may dequeue a message intended for project 2. Is this correct?
From reading the documentation it doesn't seem clear that I can send project 2 its sub-orchestration messages as well as send project 3 its specific orchestration.
Am I thinking about this problem incorrectly? Is there a different way I might want to approach this?
What you want cannot be achieved.
As of now, Azure Functions only allows orchestrator functions to call activity and sub-orchestrator functions that exist in the same function app. The main reason is a technical one: queues within a task hub are shared across all functions, so there's no way to guarantee that a message intended for FunctionAppA does not get picked up by FunctionAppB.
If cross-project communication is required, the correct method is to use HTTP or queues.

delayed_job: One job per tenant at a time?

I have a multitenant-Rails app with multiple delayed_job workers.
In order to avoid overlapping tenant-specific work, I would like to separate the workers from each other in such a way that each one works on only one tenant-specific task at a time.
I thought about using the (named) queue column and adding "tenant_1", "tenant_2" and so on. Unfortunately the queues have to be named during configuration, so this approach is not flexible enough for many tenants.
Is there a way to customize the way delayed_job picks the next task? Is there another way to define a scope?
Your best bet is probably to spin up a custom solution that implements a distributed lock - essentially, the workers all run normally and pull from the usual queues, but before performing work they check with another system (Redis, RDBMS, API, whatever) to verify that no other worker is currently performing a job for that tenant. If that tenant is not being worked, set the lock for the tenant in question and work the job. If the tenant is locked, don't perform the work. It's your call on a lot of the implementation details, like whether to move on and try another job, re-enqueue the job at the back of the queue, consider it a failure and bind it to your retry limits, or do something else entirely. This is pretty open-ended, so I'll leave the details to you, but here are some tips, followed by a rough sketch after the list:
Inheritance will be your friend; define this behavior on a base job and inherit from it on the jobs you expect your workers to run. This also allows you to customize the behavior if you have "special" cases for certain jobs that come up without breaking everything else.
Assuming you're not running through ActiveJob (since it wasn't mentioned), read up on delayed_job hooks: https://github.com/collectiveidea/delayed_job/#hooks - they may be an appropriate and/or useful tool
Get familiar with some of the differences and tradeoffs in Pessimistic and Optimistic locking strategies - this answer is a good starting point: Optimistic vs. Pessimistic locking
Read up on general practices surrounding the concept of distributed locks so you can choose the best tools and strategies for yourself (it doesn't have to be a crazy complicated solution; a simple table in the database that stores the tenant identifier is sufficient, but you'll want to consider the failure cases - how do you manage locks that are abandoned, for example)
Seriously consider not doing this; is it really strictly required for the system to operate properly? If so, it's probably indicative of an underlying flaw in your data model or in how you've structured transformations around that data. Strive for ACIDity in your application when thinking about operations on the data and you can avoid a lot of these problems. There's a reason it's not a commonly available "out of the box" feature on background job runners. If there is an underlying flaw, it won't just bite you on this problem but on something else - guaranteed!
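Here is a rough sketch of what a Redis-based tenant lock on a base job could look like (the key format, the 10-minute TTL, the 30-second retry, and the work method are assumptions for illustration, not part of the answer above):

require "redis"

class TenantLockedJob
  REDIS = Redis.new
  LOCK_TTL = 600  # seconds; lets abandoned locks expire on their own

  def initialize(tenant_id)
    @tenant_id = tenant_id
  end

  def perform
    lock_key = "tenant_lock:#{@tenant_id}"
    # SET NX EX acquires the lock atomically and attaches an expiry
    if REDIS.set(lock_key, Process.pid, nx: true, ex: LOCK_TTL)
      begin
        work
      ensure
        REDIS.del(lock_key)
      end
    else
      # tenant is busy: push the job to the back of the queue and try again later
      Delayed::Job.enqueue(self.class.new(@tenant_id), run_at: Time.now + 30)
    end
  end

  def work
    raise NotImplementedError, "subclasses implement the tenant-specific work"
  end
end

Concrete jobs inherit from TenantLockedJob and override work, which keeps the locking behaviour in one place and still lets you special-case individual jobs when needed.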
If you are trying to avoid two different workers working on the same tenant, then that's a bad design choice; something is smelling there, so fix that first. However, if you want the same kind of worker instances working on different tenants, below is the easiest solution. The relationships in the example are my hypotheses.
ExpiredOrderCleaner = Struct.new(:tenant_id) do
  def perform
    # delete expired orders for this tenant only (hypothetical model/scope)
    Order.where(tenant_id: tenant_id).expired.delete_all
  end
end

Tenant.find_each do |tenant|
  Delayed::Job.enqueue ExpiredOrderCleaner.new(tenant.id)
end
This will create a unique job for each tenant, and a single worker instance will work on a specific tenant's job at a time. However, there can be other kinds of jobs working on the same tenant, which is fine, as it should be. If you need a smaller scope, just pass more arguments to the worker, use them in the query, and use database transactions to avoid collisions.
These best practices hold for any background worker:
Make your jobs idempotent and transactional, meaning a job can safely execute multiple times
Embrace concurrency: design your jobs so you can run lots of them in parallel
Your work will be a lot easier if you use the apartment gem and Active Job wrappers; see the examples in their documentation.
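If you do go the apartment route, the tenant switch inside a job stays small. A sketch, following the same Struct style as above (Apartment::Tenant.switch is the gem's API; the Report model, its method, and Tenant#name are made up for the example):

TenantReport = Struct.new(:tenant_name) do
  def perform
    # all queries inside the block run against that tenant's schema
    Apartment::Tenant.switch(tenant_name) do
      Report.generate_for_current_tenant
    end
  end
end

Tenant.find_each do |tenant|
  Delayed::Job.enqueue TenantReport.new(tenant.name)
end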

Best strategy: Deploy new code to existing instances or new ones?

Thoughts wanted: When deploying new code, are there benefits to deploying to brand new instances (droplets on Digital Ocean) versus deploying to existing instances?
With the ease of deploying new instances nowadays, I'm wondering if the better route to go now is to simply deploy a new instance and change the routing to the new instance...
I know that whether you use Chef or another deployment tool may dictate a solution, but I'm looking for general thoughts on the topic...
It all depends on how big your infrastructure is and how much work you will need to debug your system in the case of show-stoppers and bugs. Deploying to existing machines can be maintained to some extent at a small scale, but when the footprint of your system becomes big, it is time for immutable infrastructure: deploying new machines and getting rid of the old ones, instead of applying new changes to existing ones.
The benefits of this approach are:
Easier, more encapsulated deployments at the machine level: you attach the new instances, wait for traffic to drain from the old ones, and then take them off.
Logging is more centralized for all your machines, which is handy when debugging where an issue is coming from and which machine is producing the bad behaviour.
Flexibility to kill servers that are causing problems and create new ones on the fly, connecting them to your load balancer.

Automating Azure VIP Swap

I have an ASP.NET MVC 4 app hosted as an Azure web role. I want to do something that seems like it should be pretty standard: I want to create a function that I can call that initiates a VIP swap and raises an event (or calls a callback) when the VIP swap operation is done.
Just to add some context to the situation: My website implements a workflow that takes about an hour (or less) to complete. If I want to release a new version of the website code, it's convenient (i.e. much less "backward compatibility" code to write) to first let all of the current users complete the workflow so that the new code doesn't need to deal with data created by the previous version of the code. So a management function in my website would first poke a value into the database that disables new workflows; it would then wait until all current workflows are done; it would then call the "VIP Swap" routine; finally, when the VIP Swap routine signals its completion, it would poke the database value to re-enable new workflows.
I found the Microsoft documentation for how to programmatically initiate a VIP swap here:
http://msdn.microsoft.com/en-us/library/ee460814.aspx
The procedure involves POSTing to a magic URL and including some headers in the POST, then periodically performing a GET to a magic URL and checking the response code.
The more I think about this, the more non-trivial it seems. In addition to the basic complexities of wiring up a background timer and completion notification, I don't know what complexities, if any, I might run into trying to do this stuff in the IIS environment. Can I even perform HTTP operations on a background thread? For that matter, will I run into complications just trying to use any of the half dozen or so different "do things in the background" mechanisms baked into .NET?
Any help or guidance will be greatly appreciated. In particular, I'd be ecstatic if someone could point me at a ready-to-go implementation of this function!
I don't think you will find an easy solution to this, as the fabric controller is set up to do some very fancy things without your involvement. Running hour-long workflows in a cloud computing environment, where an instance can be pulled out from underneath you (with a maximum of 5 minutes from the OnStopping event being called to clean up), requires that you do other work anyway to make sure that all of your tasks complete.
The simple question is "What do you do if an instance goes down while workflows are still running?" Do you restart them or are they lost? If they get lost then you don't care anyway, so killing workflows for an upgrade is equally unimportant. If you restart them, then use that same mechanism to decide whether or not a node is due to be shut down, and distribute the jobs accordingly. This pattern is eerily similar to the Hadoop JobTracker. Don't just run the workflows on any ol' instance. Submit them to a (job tracker) service that decides what to do. The (job tracker) service can then use the service management API to scale up as many instances as you need running the version that you want, run workflows on the appropriate nodes, and shut them down when they are no longer needed or are outdated.
Unfortunately this may not be the simple solution that you are looking for, but something in your architecture needs to change, rather than trying to force PaaS to fit your current approach. Decomposing your workloads, creating loosely coupled services, and designing for failure are a few of the cloud/distributed computing practices that need to be considered. There is a reason why Hadoop is built the way that it is, and why it has a reputation for being able to get work done on a bunch of somewhat unreliable commodity hardware.

How to architect Rails site that can be edited while running?

I am writing a Rails app that "scrapes/navigates" some other websites and web services for content. I am using Mechanize and Savon to do the heavy lifting.
But given the dynamic nature of the web, I'd like to make my calls to these editable by the admin users of the site - rather than requiring me to release a new version of the site.
The actual scraping thread happens async to the website, using the daemons gem.
My requirements are:
Thinking that the scraping/webservice calling code is quite simple, the easiest route is to make the whole class editable by the admins.
Keep a history of the scraping code - so that we can fairly easily revert if we introduce a problem.
Initially use the code from the file system, but as soon as that's been edited and stored somewhere, use that code instead.
I am thinking my options are:
Store the code in the db (with a history table for the old versions)
Store the code in a private git repo somewhere and access that for the history/latest versions.
I am thinking the git route might be easiest, given its raison d'etre is to track file history...
But perhaps there is a gem/plugin that does all this for me, out of the box?
Thanks in advance for any tips/advice.
~chris
I really hope you aren't doing something like what's talked about here...
Assuming you are doing a proper mixin, there used to be a gem called "acts_as_versioned" which would do something like what you want. It's been a while, so I don't know if it's been turned into a plugin or if it's been abandoned. Essentially, the approach it uses is to provide a combination key for your versioned table.
Your database would have a structure like this:
Key column (id for the record)
Version column (id for the record's version)
All the record attributes
Let's say you had a table for your scripts, and the script you wanted has three versions. Your table would have the following records:
123, 3, '#Be good now'
123, 2, 'puts "Hi"'
123, 1, '#Do not be bad'
Getting the most recent version would be as simple as
Scripts.find :first, :conditions=>{:id=>123}, :order=>"version desc"
Rolling back would be as simple as removing the most recent version, or having another table with a pointer to the active version. It's up to you.
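For example, rolling back by removing the newest row could look like this (same assumed Scripts model and id as above, keeping the answer's old-style finder syntax):

latest = Scripts.find :first, :conditions => { :id => 123 }, :order => "version desc"
latest.destroy  # the previous version becomes the most recent one again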
You are correct in that git, subversion, mercurial and company are going to be much better at this. To provide support, you just follow these steps:
Check out the script on the server (using a tag so you can manage what goes there at any time)
Set up a cron job to check out the new script periodically (like every six hours or whatever you feel comfortable with)
The daemon you have for running the script should run the new version automatically.
IF your site is already under source control, and IF you're running under mod_rails/passenger, you could follow this procedure:
edit scraping code
commit change locally
touch yourapp/tmp/restart.txt
That should give you a history of the change, and you shouldn't have to re-deploy.
A bit safer, though I'm not sure if it's possible for you: on a test/development server, make the change, commit locally, and test it; then on the production server, git pull and touch tmp/restart.txt.
I've written some big spiders and page analyzers in the past, and one of the things to keep in mind is what code is providing what service to the entire application.
Rails is providing the presentation of the data being gathered by your spidering engine. The presentation is one side of the coin, and spidering is the other, and they should be two separate code bases, tied together by some data-sharing mechanism, which, in your case, is the database. The database gives you some huge advantages as does having Rails available, when your spidering code is separate. It sounds like you have some separation already, but I'd recommend creating a wider gap. With that in mind, here's how I've done it before, and what I'd do now.
Previously, I had a separate app for my spidering that was spawning multiple spider tasks. Each task would look at a bunch of different URLs, throw their results in the database, then quit. Each time one quit the main app would spawn another spider to process more URLs. Each loop, the main app checked a YAML configuration file for run-time parameters, like how many sub-tasks it should have running, how many URLs they'd get, how long they'd wait for connections, etc. It stored the last modification date of the config file each time it loaded it so, if I made a change to the file, the app would sense it in a reasonably short time, reread the file, and adjust its behavior.
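A minimal sketch of that mtime-based reload, assuming a spider_config.yml file and made-up option names (nothing here is from the original code):

require "yaml"

class SpiderConfig
  def initialize(path = "spider_config.yml")
    @path = path
    @loaded_at = nil
    @settings = {}
  end

  # re-read the file only when its modification time changes
  def settings
    mtime = File.mtime(@path)
    if @loaded_at.nil? || mtime > @loaded_at
      @settings = YAML.load_file(@path)
      @loaded_at = mtime
    end
    @settings
  end
end

# Each pass of the main loop then picks up edits within one iteration:
# config = SpiderConfig.new
# loop do
#   opts = config.settings
#   spawn_spiders(opts["subtask_count"], opts["urls_per_task"])  # hypothetical helper
#   sleep opts.fetch("poll_interval", 60)
# end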
All state information about the URLs/pages/sites being scraped/spidered, was kept in the database so I could check on its progress. I could see how many had been processed or remained in the queue, the various result codes, and the content being returned. If I didn't like something I could even tweak the filters to skip junk pages, knowing the spidering tasks would be updated in a few minutes.
That system worked extremely well, spidered a major customer's series of websites without a glitch, running for several weeks as I added new sites to the list. (We were helping one of the Fortune 50 companies improve their sites, and every site had been designed and implemented by a different team, making every site completely different. My code had to be flexible and robust; I was really happy with how it worked out.)
To change it, these days I'd use a database table to hold all the configuration info. That way I could easily build an admin form, and let someone else inherit the task of adjusting the app's runtime configuration. The spider tasks would also be written so they'd pull their configuration from the database, rather than inherit it from the main app. I originally had the main app do all the administration and pass the config info to the spidering apps because I wanted to keep the number of connections to the database as low as possible. I was using Postgres and now know it could have easily handled the load, so by letting the individual tasks handle their configuration I could have made it more responsive.
By making the spidering engine separate from the presentation engine it was possible to temporarily stop one or the other without affecting the progress of the spidering job. Once I had the auto-reload of the prefs in place I don't think I had to stop the spidering engine, I just adjusted its prefs. It literally ran for weeks without stopping and we eventually pulled the plug because we had enough data for our needs.
So, I'd recommend tweaking your code so your spidering engine doesn't rely on Rails, instead it will be fired off by cron or a separate scheduling app. If you have to temporarily stop Rails your engine will run anyway. If you have to temporarily stop the engine then Rails can continue serving pages. The database sits between the two acting as the glue.
Of course, if the database goes down you're hosed all the way around, but what else is new? :-)
EDIT: Chris said:
"I see your point about the splitting the code out, though my Ruby-fu is low - not sure how far I can separate things without having to have copies of the ActiveModel/migrations stuff, plus some shared model classes."
If we look at your application as spider engine <--> | <-- database --> | <--> Rails/MVC/presentation, where the engine and Rails separately read and write to the database, and look at what each does well, that helps figure out how to break them into separate code bases.
Rails is designed to handle migrations, so let it. There's no reason to reinvent that wheel. But how often do you do migrations, and what is affected when you do? You do them seldom once the application is stable, and at that point you'd do them in a maintenance cycle to tweak the database. You can shut down the spidering engine and the web interface for a few minutes, migrate the database, then bring things up and you're off and running. Migrations are a necessary evil, but are hardly show-stoppers once in production. Most enterprises have "Software Sunday", or some pre-announced maintenance window, so do the same.
ActiveRecord, modeling, and associations are pretty easy to deal with too. The models are in files that are already required internally by Rails, so the spidering engine can inherit the database know-how that way too; multiple apps/scripts can use the same model file. You don't see the Rails books talk about it much, but ActiveRecord is actually pretty easy to use outside of Rails. Search the googles for "activerecord without rails" for more info.
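As a sketch of ActiveRecord outside Rails (the adapter, database name, and Page model are assumptions for illustration):

require "active_record"

ActiveRecord::Base.establish_connection(
  adapter:  "postgresql",
  database: "spider_production",
  host:     "localhost"
)

# reuse the very same model file the Rails app loads
require_relative "app/models/page"

Page.where(fetched: false).find_each do |page|
  # ... fetch and parse the page, then persist the result ...
  page.update(fetched: true)
end

The engine script and the Rails app then share one schema and one set of models without the engine ever loading the Rails framework itself.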
You can pull in ActiveSupport also if you want some of its extensions to classes by doing a regular require, but the Rails "view" and "controller" logic, which normally applies to presenting the web interface, shouldn't be needed at all in the engine.
Business logic, which goes in the controllers in Rails could even be refactored into separate methods that get required by the Rails side of things and by the spidering engine. It's a different way of looking at Rails but falls in line with the "DRY" mantra - don't repeat yourself, so make things modular and require (or require_relative) bits and pieces that are the building blocks of the entire system.
If you don't want a totally separate codebase, you can take advantage of Rails' script runner, which gives a script access to ActiveRecord::Base, ActiveRecord::Associations, and ActiveSupport. Do a rails runner -h from your app's main directory, or search for "rails runner" for more info. runner is not good for a job that starts and runs many times an hour, because Rails' startup cost is high. But if you have a long-running task, say one that runs in parallel with your Rails app, then it's a great choice. I'd give it serious consideration for the spidering side of your application. Eventually you might want to break the spidering engine out to a separate host so the presentation side has a dedicated host, so runner will help you buy time and do it in small steps.
