I am trying to run the Databricks "Forecasting using Time Series Analysis" notebook in a local environment. It is basically looking for the path '/citibike/timeseries/{0}'.format(station_id). Can someone suggest where I can download the citibike/timeseries data, or how to mount it to my workspace?
Thanks in advance for the support
Ramabadran
I am trying to copy files from a Linux directory to a GCP bucket using the "Transfer for on-premises" option. I've installed the Docker agent script on Linux and the GCP bucket is created. I now need to run a docker run command to copy the files. My question is: how do I specify the source and target in the docker command? For example:
sudo docker run --source --target --hostname=$(hostname) --agent-id-prefix=ID123456789
The short answer is you can't supply a source/destination to this command, because its purpose is not to transfer the data. This command starts the agents for the service - agents are always-running processes that help you move data.
After starting agents that have access to your files, you issue a copy command in the Cloud Console, where you can specify a source directory and a target bucket and prefix. When you do this, the service contacts the agents and uses them to push the data to Google Cloud in parallel, for faster transfers (a command-line sketch follows the links below). See the following links for more details:
Overview of how Transfer Service for on-premises data works
Setting up the service, and how to submit a transfer job
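A minimal sketch of creating such a transfer job from the command line instead of the Cloud Console, assuming a recent gcloud SDK that includes the transfer commands; the source path, bucket, and agent pool name are placeholders:

# Create a transfer job from an on-premises POSIX directory to a bucket,
# served by the agents you started earlier (names are illustrative):
gcloud transfer jobs create posix:///data/exports gs://my-target-bucket/prefix \
    --source-agent-pool=my-agent-pool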
I have WebSphere Application Server 8.5.5.14 hosting my ERP. I want to dockerize the application and deploy it into a Kubernetes cluster. Can anyone provide information on how to create an image out of my existing WAS 8.5.5.14 installation?
In theory you could do this by creating a tarball of the filesystem and importing it into Docker to make an image, via something like:
cat WAS.tar | docker import - appImage
but there are a number of issues you'll need to avoid. For example, if you have resources (JDBC drivers, resource adapters, etc.), the tarball will need to include all of them. You'll also need to expose all of the ports required by your app and its administration. A better way, and a best practice, is to start with an IBM-supported image of traditional WAS and build your system atop it (a sketch follows the link below).
There are detailed instructions to do this at https://github.com/WASdev/ci.docker.websphere-traditional#docker-hub-image
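A minimal sketch of that approach, assuming the ibmcom/websphere-traditional image from the repository above; the tag, application name, and in-image paths are placeholders, so check the README for the current conventions:

# Build a custom image on top of the IBM-supported traditional WAS image.
cat > Dockerfile <<'EOF'
FROM ibmcom/websphere-traditional:latest
# Copy the application and a wsadmin install script into the image;
# the destination paths below are placeholders taken from the repo's examples.
COPY --chown=was:root MyERP.war /work/app/MyERP.war
COPY --chown=was:root install_app.py /work/config/install_app.py
EOF
docker build -t my-erp-was .
# Expose the default HTTP and admin console ports when running:
docker run -d -p 9080:9080 -p 9043:9043 my-erp-was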
F Rowe's answer is good; if you follow their advice of using the official images, you will be using WebSphere v9.0 in the container. You can use this tool, which can help you figure out whether there are any changes you need to make to your application to get it working in the container. It also generates some of the wsadmin scripts to configure the server in the image.
When an ML model is trained on-premises, it should be moved automatically to Azure Storage.
How can I automate storing an on-premises-trained ML model in an Azure Storage account? The goal is that as soon as the model is trained, it is automatically stored inside a storage account container.
There are several solutions that can help copy the trained model files from on-premises to Azure Storage.
Use the azcopy sync command to replicate the source location to the destination location. Even if your on-premises OS is Linux, you can run it via crontab at an interval (see the sketch after this list).
Use Azure/azure-storage-fuse to mount a container of Azure Blob Storage into the Linux filesystem, then save the trained model files directly to the mounted path, if the on-premises training machine is Linux.
Use an Azure file share as a directory in your on-premises filesystem, mounted on Windows, Linux, or macOS via SMB 3.0, then save the trained model files into it.
At the end of the Python training script, add some code that uses the Azure Storage SDK for Python to upload the trained model files directly to Azure Storage.
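A minimal sketch of the first option, assuming azcopy v10 is installed and authenticated (for example via azcopy login); the local path, storage account, and container names are placeholders:

# One-off sync of the model output directory to a blob container:
azcopy sync "/home/user/models" "https://mystorageaccount.blob.core.windows.net/models" --recursive

# To run it on a schedule, add a line like this via crontab -e
# (every 10 minutes; all names are placeholders):
*/10 * * * * /usr/local/bin/azcopy sync "/home/user/models" "https://mystorageaccount.blob.core.windows.net/models" --recursive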
Hope it helps.
I work on a VM on Google Cloud for my machine learning work.
To avoid installing all the libraries and modules from scratch every time I create a new VM on GCP, I want to save the VM that I created on Google Cloud, for example on GitHub as a Docker image, so that next time I can just load it, run it, and have my VM ready for work.
Is this a straightforward task? Any ideas on how to do that, please?
When you create a Compute Engine instance, it is built from an artifact called an "image". Google provides some OS images from which you can build. If you then modify these images by (for example) installing packages or performing other configuration, you can then create a new custom image based upon your current VM state.
The recipe for this task is fully documented within the Compute Engine documentation here:
https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images
Once you have created a custom image, you can instantiate new VM instances from these custom images.
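A minimal sketch with the gcloud CLI, assuming it is installed and authenticated; the image, disk, VM, and zone names are placeholders:

# Create a custom image from the boot disk of an existing VM
# (stop the VM first, or add --force to image a running disk):
gcloud compute images create my-ml-image \
    --source-disk=my-vm \
    --source-disk-zone=us-central1-a

# Later, instantiate a new VM from the custom image:
gcloud compute instances create my-new-vm \
    --image=my-ml-image \
    --zone=us-central1-a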
We intend to deploy a trained model in production. Since we cannot keep it in the code base, we need to upload it to the cloud and refer to it at runtime.
We are using Kubernetes, and I'm relatively new to it. Below is my stepwise understanding of how to solve this:
Build a persistent volume with my trained model (size around 30 MB).
Mount the persistent volume into a pod with a single container.
Keep this pod running, and refer to the model from a Python script via the pod.
I tried following the persistent volume documentation, with no luck. I also tried to move the model to the PV via "kubectl cp", with no success.
Any idea on how to resolve this? Any help would be appreciated.
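For reference, a minimal sketch of steps 1-3 using a PersistentVolumeClaim, assuming the cluster has a default StorageClass; all names, the container image, and the model filename are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: model-pod
spec:
  containers:
  - name: app
    image: python:3.9-slim
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: model-vol
      mountPath: /models
  volumes:
  - name: model-vol
    persistentVolumeClaim:
      claimName: model-pvc
EOF

# "kubectl cp" needs a running pod whose image includes tar:
kubectl wait --for=condition=Ready pod/model-pod
kubectl cp ./model.pkl model-pod:/models/model.pkl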