Can an auto-shutdown policy be set to turn off a Notebook VM, or does the user need to shut the Notebook VM down manually?
In the meantime, a team of Microsoft architects developed the following Azure Samples solution to fill this gap:
https://github.com/Azure-Samples/AzureMLResourceGovernance
It uses Logic Apps to orchestrate Container Instances to perform Azure ML resource governance related tasks (such as scheduled shutdowns).
Auto-shutdown is currently not enabled for Compute Instances or Notebook VMs, but it is on our roadmap. Details will be provided in Azure Updates when this feature is in development, in preview, or generally available. Thanks.
Auto-shutdown is currently on the roadmap. As a workaround, you can see and change settings for the underlying IaaS VM: if you use Azure VM auto-shutdown (or an Automation runbook) to stop the VM, start it from the Azure VM resource blade first, then restart it from the Azure ML workspace UI.
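For example, the classic Azure VM auto-shutdown can be scheduled from the CLI against the underlying VM. A minimal sketch, assuming the VM is visible in your subscription (the resource group and VM name are placeholders):

# Schedule the underlying IaaS VM to stop at 19:00 UTC each day.
az vm auto-shutdown --resource-group my-rg --name my-notebook-vm --time 1900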
Azure Notebooks starts the underlying virtual machine whenever you run a notebook or other file. The server automatically saves files and shuts down after 60 minutes of inactivity. You can also stop the server at any time with the Shutdown command (keyboard shortcut: h). Ref: https://learn.microsoft.com/en-us/azure/notebooks/configure-manage-azure-notebooks-projects.
Azure Machine Learning Compute Instance Auto shutdown is now in private preview - contact your local Microsoft representative to have it enabled.
There is now a new auto-shutdown preview feature for the compute instance in Azure Machine Learning. More details here.
Make sure to enable this preview feature first.
Then check the compute instance's Schedules option to see the new feature.
Note that we can also control the auto-shutdown time at the subscription level using a built-in policy; details here.
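As a sketch, such a built-in policy could be assigned at subscription scope with the Azure CLI; the assignment name, subscription ID and policy definition ID below are placeholders to look up for your tenant:

# Assign a built-in auto-shutdown policy at subscription scope (all IDs are placeholders).
az policy assignment create --name ci-auto-shutdown --scope /subscriptions/<subscription-id> --policy <policy-definition-id>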
I am trying to integrate the driver logs with the Control-M scheduler.
How can I access the real-time driver log (without the ~5 minute lag) other than through the Azure Databricks Spark UI? Is there an API, or a location where the logs are written in real time?
I am also planning to do analysis on top of them in Elastic.
Such things (real-time collection of metrics or logs) are usually done by installing an agent (for example, Filebeat) via init scripts (global or cluster-level).
The actual script content heavily depends on the type of agent used, but Databricks' documentation contains some examples of that (see also the sketch after this list):
Blog post on setting up the Datadog integration
Notebook that shows how to set up an init script for Datadog
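As a rough illustration of the idea (not taken from the Databricks docs), a cluster-scoped init script could install Filebeat and point it at the driver log directory. The version, log path and output endpoint below are all assumptions to adapt:

#!/bin/bash
# Hypothetical cluster-scoped init script: install Filebeat and ship the
# Spark driver logs as they are written. Runs as root on each node.
set -euo pipefail

FILEBEAT_VERSION=8.12.0
curl -sL -o /tmp/filebeat.deb \
  "https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${FILEBEAT_VERSION}-amd64.deb"
dpkg -i /tmp/filebeat.deb

cat > /etc/filebeat/filebeat.yml <<'EOF'
filebeat.inputs:
  - type: filestream
    paths:
      - /databricks/driver/logs/*          # assumed driver log location
output.elasticsearch:
  hosts: ["https://my-elastic-host:9200"]  # placeholder endpoint
EOF

# Start in the background rather than via systemd, since init scripts
# run during container startup.
nohup filebeat -e -c /etc/filebeat/filebeat.yml >/var/log/filebeat-init.log 2>&1 &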
Google App Engine flexible allows you to deploy Docker containers... how does scaling manifest itself?
Will a new VM be spun up each time the application needs to scale, or can it spin up new container instances on an existing VM?
Can individual containers scale independently of each other? E.g. the product container is under load but the customer container is not, so only a new product container is spun up?
I realize GKE would be a better option for scaling containers, but I need to understand how this works on GAE for a multitude of reasons.
App Engine flex will only run one container of your app per VM instance. If it needs to scale up, it will always create a new VM to run the new container.
As per your example, if you want to scale "product" and "customer" containers separately, you'll need to define them as separate App Engine services. Each service will have its own scaling set up and act independently.
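For illustration, each service gets its own app.yaml with its own scaling block. A minimal sketch for a hypothetical "product" service on flex (the service name and numbers are just examples):

# app.yaml for the "product" service; deploy "customer" separately with its own file.
runtime: custom        # build from the Dockerfile in this directory
env: flex
service: product
automatic_scaling:
  min_num_instances: 2
  max_num_instances: 10
  cpu_utilization:
    target_utilization: 0.6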
If you have containers, you can have a look at Cloud Run, which scales to zero and can scale up very quickly (there is no new VM to provision, which can take several seconds on App Engine flex).
However, long-running requests aren't supported (limited to 15 minutes). It all depends on your requirements in terms of features, portability, and scalability.
Provide more details if you want more advice.
Google App Engine is a fully managed serverless platform, where you basically submit code and GAE manages the underlying infrastructure and the runtime environment (for example, the version of the Python interpreter). You can also customize the runtime environment with Dockerfiles.
In contrast, GKE provides more fine-grained control over your cluster infrastructure. You can configure your compute resources, network, security, how the services are exposed, custom scaling policies, etc. GKE can be considered a managed container orchestration platform.
An alternative to GKE that can provide even more control is creating the resources you need in GCE and configuring Kubernetes by yourself.
Both GKE and GAE are based on, and priced by, Compute Engine instances. Google Cloud Functions, however, is a more recent event-driven serverless service. GCF is great if you want to execute code on an event-driven basis (for example, sending a confirmation email after a user registers).
In terms of complexity and control over your code's environment I would order the different Google services as:
GCE(Compute Engine) > GKE(Kubernetes Engine) > GAE(App Engine) > GCF(Cloud Functions)
One point to consider is that the more low-level you go the easier it is to migrate your service to another platform.
Given that you seem to be deploying only containerized applications, I would recommend giving GKE a try, specially if you want to have a cluster of multiple services that interact with each other.
In terms of scaling, GAE will scale only VM instances and you have only one app per VM instance.
In GKE you have two types of scaling: container scaling and VM instance scaling. You can have multiple containers in one instance and those containers can be different apps. Based on limits you define (such as the CPU used in an app) GKE will try to efficiently allocate the containers across the instances of your cluster.
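As a rough sketch of what that looks like in practice (names and numbers are illustrative, and an existing cluster and image are assumed), you declare per-container resource requests and let an autoscaler add replicas:

# Deploy an image, request CPU per container, then autoscale replicas on CPU.
kubectl create deployment product --image=gcr.io/my-project/product:latest
kubectl set resources deployment product --requests=cpu=250m
kubectl autoscale deployment product --cpu-percent=60 --min=2 --max=10

VM-level scaling is separate: the GKE cluster autoscaler adds or removes nodes in the node pool when pods no longer fit.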
Given that you can upload Docker images to both App Engine and Compute Engine, what's the real difference for a person who always containerizes their apps with Docker?
According to a similar question, the difference boils down to PaaS vs IaaS, but with a Dockerfile you always specify the OS and runtime environment. So the only difference I see is that you might end up over-specifying on App Engine by giving it a container.
Fundamentally, if you just want your app to scale seamlessly and quickly without much input, use App Engine Flex. If you want more control that you can configure in different ways using other Google products, consider an unmanaged instance group with Compute Engine.
Some history
It's worth noting that the ability to add Docker images to Compute Engine instances was an extremely recent development. And before that, App Engine Flexible was the new kid in town because we used to only have App Engine standard which definitely didn't allow you to use Docker as a base.
Key Differences
Here are the key differences in my experience:
App Engine is designed as a PaaS product, so you can customize scaling parameters in your app.yaml and App Engine reads those and takes over from there. It's technically true that you can do this using Compute Engine, but it involves more configuration: you need to set up an instance group, a backend and a frontend. With App Engine, all of that is taken care of for you.
You can't set up load balancers or any peripheral services or products on top of App Engine. App Engine Flexible went quite a bit further to give the user more control and more (sorry) flexibility. However, it doesn't allow unfettered integration with other services. With Compute Engine, by contrast, you can set up an HTTPS load balancer, add your machines to different networks and subnets, set custom tags, etc.
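To make that contrast concrete, here is roughly what the Compute Engine side of such a setup looks like (all names and values are placeholders, and the load-balancer pieces are only indicated):

# Managed instance group with autoscaling on Compute Engine.
gcloud compute instance-templates create my-template --machine-type=e2-medium --image-family=debian-12 --image-project=debian-cloud
gcloud compute instance-groups managed create my-group --template=my-template --size=2 --zone=us-central1-a
gcloud compute instance-groups managed set-autoscaling my-group --zone=us-central1-a --max-num-replicas=10 --target-cpu-utilization=0.6
# ...plus a health check, backend service and forwarding rule for the load balancer.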
On an additional note, a more detailed explanation on the difference between App Engine and Compute Engine can be found here.
App Engine is a PaaS offering and a managed service from Google. It has options such as dynamic instances and resident instances to achieve scaling. It has a predefined runtime setup for the supported languages; we just need to deploy the code. It supports running multiple versions of an application simultaneously, so we can release code changes to a separate group of users. It inherently supports a container environment, so basic details like the number of pods or nodes need not be provided. For App Engine standard, if there is no load the instance count is even reduced to zero, meaning no cost; App Engine flexible requires at least one instance up. Deployment can be done with a single command: gcloud app deploy app.yaml
Compute Engine is IaaS, so the developer needs to create a machine, install the desired software, and set up the Docker container environment. There is no built-in scaling, version management, traffic control, security, firewall, health monitoring and repair, etc. So with Compute Engine it's really tough to achieve the capabilities provided by App Engine. The better alternative is Kubernetes Engine.
I have my own custom application. It works with Apache Kafka and has two main parts: Producer and Consumer.
Is there a way to monitor all running producers and consumers in Cloudera Manager (like the DataNodes of HDFS)? The first and main feature I need is showing the status of every instance (started or stopped).
Or maybe there is some other software (besides Cloudera Manager) that can do this?
Thanks
You can (and should) use an APM tool. The company I work for is AppDynamics; we provide deep transaction tracing and monitoring, among other things, for these types of apps. There are other leading APM tools as well, for example Dynatrace and New Relic.
I want to map a ClearCase view to a network drive from inside a Windows service.
I have tried the net use command, but it did not work properly.
You should be able to run the same kind of command as the one used when paths are too long, which is subst:
rem for a snapshot view:
subst X: c:\path\to\my\View
rem for a dynamic view:
subst X: M:\myView
in order to map a view to a drive letter.
This should work from within a service, provided:
you are using your Windows account (and not the "Local System account")
the dynamic view is already started (and visible in the M:\ MVFS mounting point drive)
I wish this approach would work, but it really doesn't from a service; I've beat on this problem pretty intensely to no avail. The problem is two-fold:
From a Windows service, to be able to map drives visible to other users it has to "Log on" as the "Local system" account (default) with the "Interact with desktop" property set.
To be able to talk to ClearCase, the Windows service process has to "Log on" as a normal user with ClearCase access (e.g. in the atria group typically).
So (1) and (2) are mutually exclusive, but you need both and can't have both. For (2), presumably the reason you can't "Interact with desktop" and map drives there is that mapped drives require a logon session/token, which is associated with a per-user session, but services need to be able to run headless (no one logged in), where no such session/token exists.
Note that the way Rational BuildForge solves this for ClearCase is by spawning an entirely new child process solely to allow its service to talk to ClearCase:
https://www-304.ibm.com/support/docview.wss?uid=swg1PK50021
Also note that the "logon session" is identified by a unique token; this means that even if you have a process running as your desired user (domain\fred) that can access ClearCase, spawning a new process from there as the same user (domain\fred) may not get the same session token by default, depending on how it was created (i.e. CreateProcess() vs CreateProcessAsUser() vs CreateProcessWithLogonW()), making it ever more difficult to deal with tools you don't control. To demonstrate this, try running runas /user:<domain\user> "cmd /k net use" from a command prompt and you'll see all your network drives listed as "Unavailable"(!!).
It is possible (though explicitly not recommended by Microsoft), with great effort, to get this all to work if you can somehow manage to have a user always logged in from which to get their session token, as described here:
starting a UAC elevated process from a non-interactive service (win32/.net/powershell)
Otherwise, you'd have to emulate it like BuildForge does.
Also see:
Network drive is unavailable if mapped by service
Map a network drive to be used by a service
I've typically run into this sort of problem with CI servers (CC.NET / Hudson / TeamCity) that run as a Windows service. What I've had to do is ensure that somewhere before my real "job" starts, I script a way to map the network drives: re-mapping them at runtime, or mapping M:\ to an available drive letter with subst (very tedious) as VonC describes. Neither mapping is persistent (even if you use 'net use /persistent:yes'), which is what I'm guessing you were hoping for too.
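For illustration, a pre-job step along these lines re-creates the mapping inside the job's own session each time (the drive letter and view path are placeholders):

rem Hypothetical pre-build step run by the CI job itself, so the mapping
rem exists in the job's own logon session.
if exist X:\ subst X: /d
subst X: M:\myView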