I am working with Azure Databricks notebooks and have time-consuming jobs (complex queries, model training, loops over many items, etc.).
Every time I close the browser (or step away for a long time) while a notebook is running, the notebook and cluster become idle, even before the cluster's time-out period. When I open the notebook again, it looks as if the notebook is resuming, but it does not continue executing. I'd like to keep the notebook running in the background overnight or when I close the browser.
When I open the notebook the next day, it looks as though it is trying to resume, but it stays stuck in that state.
Any thoughts?
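One possible workaround (an assumption on my part, not something confirmed here) is to run the notebook as a Databricks Job rather than interactively, so the run lives on the cluster and is not tied to the browser session. A minimal sketch, assuming a job has already been created for the notebook (the job ID 123 and the environment variables are placeholders), triggered via the Jobs REST API:

// Trigger an existing Databricks Job so the notebook run continues
// server-side after the browser is closed. Requires Node 18+ for fetch.
const host = process.env.DATABRICKS_HOST;   // e.g. https://adb-<workspace-id>.azuredatabricks.net
const token = process.env.DATABRICKS_TOKEN; // personal access token

async function runNotebookJob(jobId: number): Promise<void> {
  const res = await fetch(`${host}/api/2.1/jobs/run-now`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ job_id: jobId }),
  });
  const body = await res.json();
  console.log("started run:", body.run_id); // poll /api/2.1/jobs/runs/get to follow the run
}

runNotebookJob(123).catch(console.error);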
I deployed a Vue.js frontend and a Kotlin server app. Cloud Run promises to put a service to sleep if it receives no requests for a certain amount of time. I had not opened my app for a day; when I opened it, it was available almost immediately. Since I know how long it takes to spin up when started locally, I find it hard to believe that Cloud Run really put the app to sleep and spun it up that fast.
I'd love to know how I can actually see how long the spin-up took, which would also help me improve the startup time of the backend service.
After the service has been inactive for some time, record the time at which you request the service URL, then make the request.
Then go to the logs for the Cloud Run service, and use this filter to see the logs for the service:
resource.type="cloud_run_revision"
resource.labels.service_name="$SERVICE_NAME"
Look for the log entry containing your app's normal output after your request, check its timestamp, and compare it with the time you recorded.
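To make that comparison easier, a small client-side probe can print the exact send time and the round-trip duration. This is just a sketch of the approach above; SERVICE_URL is a placeholder:

// Record when the request was sent and how long the response took,
// then compare the sent timestamp with the first log entries of the revision.
// Requires Node 18+ (global fetch).
const SERVICE_URL = "https://my-service-abc123-uc.a.run.app";

async function probe(): Promise<void> {
  const sentAt = new Date();
  const started = performance.now();
  const res = await fetch(SERVICE_URL);
  const elapsedMs = performance.now() - started;
  console.log(`request sent at ${sentAt.toISOString()}`);
  console.log(`status ${res.status}, round trip ${elapsedMs.toFixed(0)} ms`);
}

probe().catch(console.error);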
You can't know when the instance will be evicted or whether it is kept in memory. Eviction could happen quickly, or take hours or days. It's "serverless".
As for the startup time: when I test, I deploy a new revision and send a request to it. In the logging service, the first log entry of the new revision gives me the cold start duration (usually 300+ ms, compared to the usual 20-50 ms with a warm start).
The billed instance time is the sum of all containers' running time. A container is considered "running" while it is processing requests.
We have a small script that scrapes a webpage (~17 entries) and writes them to a Firestore collection. For this, we deployed a service on Google Cloud Run.
Executing this code takes ~5 seconds when tested locally using the Docker container image.
The same image when deployed to Cloud Run takes over 1 minute.
Even a simple operation such as "delete all documents in a collection", which takes 2-3 seconds locally, takes over 10 seconds when deployed on Cloud Run.
We are aware of Cold Start, and so we tested the performance of Cloud Run on the third, fourth and fifth subsequent runs, but it's still quite slow.
We also experimented with the number of CPUs, instances, concurrency, and memory, using both default values and extreme values at both ends, but Cloud Run's performance remained slow.
Is this expected? Are individual instances of Cloud Run really this weak? Can we do something to make it faster?
The problem with this slowness is that if we run our code for a large number of entries, Cloud Run would eventually time out (not to mention the per-second cost of Cloud Run).
Posting an answer to my own question, as we experimented a lot with this and found issues in our own implementation.
In our case, the reason for the very slow performance was async calls that were fired off without awaiting their Promises or attaching callbacks.
What we initially missed was this section of the documentation: Avoiding background activities
Our code didn't wait for the async operations to finish and responded to the request right away. The async work then continued as background activity and took forever to finish.
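For anyone hitting the same issue, the fix is simply to await the async work before sending the response, so nothing is left running as background activity. A minimal sketch with Express and the Firestore client; the route, collection name, and scrapeEntries helper are made up for illustration:

import express from "express";
import { Firestore } from "@google-cloud/firestore";

const app = express();
const db = new Firestore();

// Hypothetical endpoint that scrapes entries and stores them in Firestore.
app.post("/scrape", async (req, res) => {
  const entries = await scrapeEntries();

  // Wait for ALL writes to finish BEFORE responding; otherwise the writes
  // become background activity and run with throttled CPU on Cloud Run.
  await Promise.all(
    entries.map((entry) => db.collection("entries").doc(entry.id).set(entry))
  );

  res.status(200).send(`stored ${entries.length} entries`);
});

// Placeholder for the actual scraping logic.
async function scrapeEntries(): Promise<Array<{ id: string; title: string }>> {
  return [];
}

app.listen(Number(process.env.PORT) || 8080);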
Responding to comments posted, or similar questions that may arise:
1. We didn't try to replicate the setup locally in a VM with the same configuration, because we figured out the cause before that.
2. We are not writing anything to the filesystem (yet), and our operations are simple calls. But this is a good question, and we'll keep it in mind when we store/write data.
I have a standard setup: two distributed Erlang nodes, one master and one standby.
When I stop the master, the standby comes on (failover); when I start the master again, the standby stops (takeover). Everything works fine as long as heart is not turned on and there is no network split.
However, when I disconnect the master from the network, after 60 seconds or so the standby gives me the error message ** removing (timedout) connection ** and starts up as if the master node had stopped. This makes sense to me: it doesn't know whether the master is alive or not, and since epmd can't connect to the master node it is removed from the nodes() list. Let's pretend for a moment that this is the desired outcome.
The problem is that when the connection is restored, I have the master and the standby running at the same time, and the standby is oblivious to the fact that the master is running. Pinging the standby during the master's init does not solve the issue. I checked nodes() on the standby after doing so; it sees the master node, but it still continues to run.
My solution for now has been to create a process that monitors all nodes above the current node in the hierarchy; if any of them is online (can be pinged), the process calls erlang:halt() to terminate the standby node. It works for simple situations, but maybe someone can tell me whether there is a better way? I found a similar problem described on the Elixir forum, so it is probably a known Erlang problem without an easy solution: https://elixirforum.com/t/distributed-application-network-split/10007
If you don't want two nodes running in parallel during a network split, I'm guessing an outside monitoring application needs to be used?
The second major issue is heart. If heart is turned on as is, the failover never happens. If heart runs with a sleep before it calls start, it stops the failover node when it calls the application start. So even when it can't start the master, for example because it doesn't have access to vital resources, it stops the failover node and doesn't bring it back up after failing to start the master. I don't know whether heart simply isn't supposed to be used with a distributed application, or whether there is an option to run some Erlang code to check that the resources are available before attempting to start the node and before stopping the failover node.
The documentation on heart is not great; it is very hard to find any examples of HEART_COMMAND. I found a way to set the HEART_COMMAND to a script from within my application, but there is a limit to how long the argument can be, and from what I can tell it is not as long as stated in the documentation. The following, for example, sets a 60-second sleep before calling the application start again. It doesn't solve anything, because after 60 seconds it stops the failover node and hangs if the master node can't start.
%% wait 60 seconds, then restart the release via its start script
heart:set_cmd("sleep 60; ./bin/myapp start")
The solution I've ended up with for now is letting heart of the main release start another release, a pre-loader, which does a preliminary check that all resources are available: if they are, it starts the main release/application, and if they are not, it keeps checking forever. This way the main app runs on the failover node without interruption. So the main release has heart turned on, and the pre-loader does not. I ended up using a bash script file because I needed to do more work than I could fit into heart:set_cmd/1, so I'm still calling heart:set_cmd(Dir ++ "/priv/myHeartScript.sh " ++ Arg1 ++ " " ++ Arg2), but don't get carried away with the args, as there is a limit on size! I also used environment variables, set in vm.args using -env, to pass data to the script, such as the pre-loader path/name. This allowed me to avoid having to edit the script during deployment.
If anyone has a better solution PLEASE let me know.
UPDATE
The team at Erlang Solutions was kind enough to shed some light on the subject. Basically, nobody they know uses Erlang's built-in distributed application model. Everything revolves around the data: as long as it is available in redundant databases, you can spin up new application instances at any time. They recommend either using cloud hosts that can spin up new servers when one goes down, or using a redundant node design, e.g. keeping 5 nodes up in parallel so that if a few go down you can restart them manually or by other means.
As for me, I can say that getting heart to start a pre-loader release/app gets the job done, but it gets complicated fast. Launching the app now requires provisioning several extra sys.config/vm.args/rebar.config files. I will be looking into their suggestions for the next iteration.
UPDATE
We moved away from the Erlang distributed model. Each node now uses RabbitMQ to send heartbeats to all nodes, including itself. If a node receives heartbeats only from itself and from no other node, it is the master; if it receives heartbeats from more than one node, use any attribute, such as the node name, to choose the master. You don't have to use RabbitMQ, but you need to make sure all nodes can reach the same destination and consume from it.
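The idea is language-agnostic; here is a rough sketch of the heartbeat/election loop, written in TypeScript with amqplib rather than Erlang, where the exchange name, interval, and election rule (smallest node name wins) are all assumptions:

import amqp from "amqplib";

const NODE_NAME = process.env.NODE_NAME ?? "node1";

async function main(): Promise<void> {
  const conn = await amqp.connect(process.env.AMQP_URL ?? "amqp://localhost");
  const ch = await conn.createChannel();

  // Every node publishes its name to a fanout exchange and consumes from it,
  // so each node sees the heartbeats of all nodes, including its own.
  await ch.assertExchange("heartbeats", "fanout", { durable: false });
  const { queue } = await ch.assertQueue("", { exclusive: true });
  await ch.bindQueue(queue, "heartbeats", "");

  const seen = new Set<string>();
  await ch.consume(queue, (msg) => {
    if (msg) seen.add(msg.content.toString());
  }, { noAck: true });

  setInterval(() => {
    ch.publish("heartbeats", "", Buffer.from(NODE_NAME));

    // Decide who the master is based on the heartbeats seen in the last interval.
    const master = [...seen].sort()[0] ?? NODE_NAME;
    console.log(master === NODE_NAME ? "I am the master" : `master is ${master}`);
    seen.clear();
  }, 5000);
}

main().catch(console.error);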
Also, our DevOps engineers oppose using heart. They prefer to use standard Linux tools to monitor application status and restart it after a crash or a server reboot.
I have a continuous WebJob running that reads messages from a queue, reads a file from Blob Storage, converts it, and then writes the converted file to a different blob container. All of the files are being converted properly, but the Kudu site for my App Service is running at nearly 100% CPU. The process explorer in Kudu shows my WebJob as the only other process running on that server. Conventional wisdom suggests that the WebJob is probably the problem. Are there any tools for determining what the problem might be?
Thanks!
The simplest approach is to download a process dump (from the Kudu process explorer) and then analyze it locally, e.g. with WinDbg or Visual Studio. By looking at the threads, you should be able to identify which ones are actively spinning the CPU.
Another simple test is to temporarily stop creating new blobs, and check whether that causes the CPU usage to go down. If it's still high in the absence of any processing, then something strange is going on.
I have a video editing task that needs to be completed occasionally. The task is relatively intensive and therefore needs a powerful machine to do it. It can take up to about 10 minutes to complete. I might get 10-20 such requests per day, though that will increase in the future.
I have created a Docker container that acts as a consumer, pulling jobs from Pub/Sub. I was thinking of running an instance of this container on Google Container Engine. However, as I understand it, I would need to keep at least one instance of this (large / powerful / expensive) container running at all times, even though it would sit idle most of the time. My cost for running this service would therefore be overly high until my usage increased.
Is there an alternative way of running my container (on GCP or otherwise) where I push a job to some service, which then starts an instance of a powerful machine, processes the job, and shuts down, so that I only pay for the CPU hours actually used?
Have a look at the cluster autoscaler: https://cloud.google.com/container-engine/docs/cluster-autoscaler