I recently learned to deploy Dask on AKS using Helm (for reference, my notes are here).
I was able to run code in Jupyter Lab, but I couldn't pin the scheduler next to the notebook to see the Dask Dashboard. I'm hoping to make it look as cool as it does here. I was, however, able to access the dashboard at a different IP address, given by the EXTERNAL-IP of the scheduler.
Is there something I am missing for how to get the scheduler to show up in a notebook? I clicked on the Dask extension tab and tried to copy in the URL, with little success.
When testing locally I was able to find the dashboard just by clicking on the magnifying glass (Auto-detect dashboard URL), and it found http://127.0.0.1:8787/
Do I need to get the scheduler on the same IP address as the jupyter notebook?
If you are able to successfully navigate to the dashboard in a separate tab, then copy that same address into the text field in the Dask labextension and things should be OK.
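If you want to sanity-check the address from inside the notebook first, something like the rough sketch below works; the scheduler address and ports are placeholders for whatever your Helm release exposes, and dashboard_link may report a cluster-internal address, in which case the EXTERNAL-IP URL you already use in the browser is the one to paste.

    # Minimal sketch: connect to the remote scheduler and print the dashboard URL.
    # "<scheduler-EXTERNAL-IP>" and the ports are placeholders; the Dask Helm chart
    # usually exposes the scheduler on 8786 and the dashboard on 8787.
    from dask.distributed import Client

    client = Client("tcp://<scheduler-EXTERNAL-IP>:8786")

    # This is the address the labextension needs. If it prints an internal
    # cluster address, use the EXTERNAL-IP form instead.
    print(client.dashboard_link)

So you don't necessarily need the scheduler on the same IP address as the notebook, as long as the dashboard address you paste into the extension is reachable.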
Is it possible to somehow serialize the current Thingsboard (let's call it TBoard) configuration, save it, and then later load the saved configuration on TBoard startup?
I am specifically interested in loading device profiles, rule chains, and dashboards.
I want to save the configuration together with my project in a git repository, so that later I could just use docker-compose to start multiple services from the project (let's call them sensors) and a single TBoard instance with the saved configuration, which will be used for collecting telemetry from the sensors and drawing dashboards.
Another reason for saving the configuration: what happens if for some reason the TBoard container crashes or somehow gets corrupted so it can't be started again? Would I have to click through everything again in order to recreate all the device profiles and dashboards, configure the rule chains, etc.?
Regarding this line
I am specifically interested in loading device profiles, rule chains, and dashboards. I want to save configuration together with my project in git repository
I have just recently implemented version control for my Thingsboard deployment. The way I am doing it is with the Python REST client.
I have written functions to export all dashboards/data converters/integrations/rule chains/widgets into JSON files, which I save in a GitHub repository.
I have also written the reverse script to push the stored files to a fresh environment, essentially "flashing" it. Surprisingly, this works perfectly.
I have an idea to publish this as a package, but it's something I've never done before so I'm unsure if I will get to it.
Just letting you know that it is definitely possible to get source control operational via the API.
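To give a rough idea of the export side, here is a minimal sketch; my actual scripts use the official Python REST client, but the example below talks to the REST API directly with requests, and the endpoints, field names, and credentials are assumptions based on the standard Thingsboard API, so adjust them to your version.

    # Rough sketch: export all tenant dashboards to JSON files that can be committed to git.
    import json
    import pathlib
    import requests

    TB_URL = "http://localhost:8080"         # placeholder Thingsboard URL
    USERNAME = "tenant@thingsboard.org"      # placeholder tenant admin credentials
    PASSWORD = "tenant"
    OUT_DIR = pathlib.Path("tb_config/dashboards")

    # Log in and grab a JWT token
    token = requests.post(
        f"{TB_URL}/api/auth/login",
        json={"username": USERNAME, "password": PASSWORD},
    ).json()["token"]
    headers = {"X-Authorization": f"Bearer {token}"}

    # Page through the tenant's dashboards (one page of up to 100 shown here)
    page = requests.get(
        f"{TB_URL}/api/tenant/dashboards",
        params={"pageSize": 100, "page": 0},
        headers=headers,
    ).json()

    # Fetch each dashboard in full and write it to its own JSON file
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    for info in page["data"]:
        dashboard = requests.get(
            f"{TB_URL}/api/dashboard/{info['id']['id']}", headers=headers
        ).json()
        name = dashboard.get("title", info["id"]["id"]).replace("/", "_")
        (OUT_DIR / f"{name}.json").write_text(json.dumps(dashboard, indent=2))

The "flashing" direction is the reverse: strip the server-generated ids from the saved JSON and POST the entities back to the fresh instance in dependency order (e.g. rule chains and device profiles before the things that reference them).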
I'm trying to use a Google Dataflow template to export data from Bigtable to Google Cloud Storage (GCS). I'm following the gcloud command details here. However, when running it I get a warning and associated error where the suggested fix is to add workers (--numWorkers) or increase the attached disk size (--diskSizeGb). However, I see no way to pass those parameters while executing the Google-provided template. Am I missing something?
Reviewing a separate question, it seems like there is a way to do this. Can someone explain how?
Parameters like numWorkers and diskSizeGb are Dataflow-wide pipeline options. You should be able to specify them like so:
gcloud dataflow jobs run JOB_NAME \
--gcs-location LOCATION --num-workers=$NUM_WORKERS --diskSizeGb=$DISK_SIZE
Let me know if you have further questions.
I am referring to this page:
https://www.instana.com/docs/setup_and_manage/host_agent/updates/#update-interval
Is there a way to pass the mode and time from outside, as environment variables or in any other way, besides logging into the pod and manually changing the etc/instana/com.instana.agent.main.config.UpdateManager.cfg file?
To whoever removed their answer: it was a correct answer, and I don't know why you deleted it. Anyhow, I am posting it again in case someone stumbles upon this.
You can control frequency and time by using INSTANA_AGENT_UPDATES_FREQUENCY and INSTANA_AGENT_UPDATES_TIME environment variables.
How to update the mode via an environment variable is still unknown at this point.
Look at this page for more info: https://www.instana.com/docs/setup_and_manage/host_agent/on/docker/#updates-and-version-pinning
Most agent settings that one may want to change quickly are available as environment variables; see https://www.instana.com/docs/setup_and_manage/host_agent/on/docker. For example, setting the mode via an environment variable is supported as well, with INSTANA_AGENT_MODE; see e.g. https://hub.docker.com/r/instana/agent. The valid values are:
APM: the default, the agent monitors everything
INFRASTRUCTURE: the agent will collect metrics and entities but not traces
OFF: agent runs but collects no telemetry
AWS: agent will collect data about AWS managed services in a region and an account, supported on EC2 and Fargate, and with some extra configurations, on hosts outside AWS
On Kubernetes, it is also of course possible to use a ConfigMap to override files in the agent container.
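To make the environment-variable route concrete, here is a minimal sketch that starts the agent container with those variables set, using the Docker SDK for Python. The agent key, endpoint, update values, and the extra host privileges are placeholders/assumptions; the full set of required mounts and options is in the Instana Docker docs linked above.

    # Minimal sketch: run the Instana agent with update and mode settings as env vars.
    # Values marked "placeholder" must come from your own Instana environment, and the
    # volume mounts required by the official docs are omitted here.
    import docker

    client = docker.from_env()
    client.containers.run(
        "instana/agent",
        detach=True,
        name="instana-agent",
        environment={
            "INSTANA_AGENT_UPDATES_FREQUENCY": "DAY",   # assumed value format
            "INSTANA_AGENT_UPDATES_TIME": "04:30",      # assumed value format
            "INSTANA_AGENT_MODE": "INFRASTRUCTURE",     # APM, INFRASTRUCTURE, OFF or AWS
            "INSTANA_AGENT_KEY": "<your-agent-key>",            # placeholder
            "INSTANA_AGENT_ENDPOINT": "<your-endpoint-host>",   # placeholder
        },
        privileged=True,        # the agent needs broad host access
        network_mode="host",
        pid_mode="host",
    )

On Kubernetes the same variables can be set in the env section of the agent DaemonSet, and file-level settings can be overridden with the ConfigMap approach mentioned above.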
This is a broad question, so any answers are deeply appreciated. I need to continually log the size of several build files (in this case some CSS and JS files), preserve this log and ideally show it as a dashboard in Jenkins.
I know that I can set up a cron job and execute a bash script to grab the files and log their size, but I'm not sure where this file would live or how to display it. Ideally the result would be a dashboard plot or bar graph over time.
Thanks.
P.S. I'm open to other logging suggestions, but Jenkins seems like the appropriate system to do this in.
Update: this isn't perfect but it works. Google Spreadsheets has a simple API for posting data, so this can work as an endpoint for any script you want to write that logs your data.
It's not a Jenkins solution, but gets the job done.
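For anyone doing the same thing, here is a minimal sketch of the kind of logging script I mean, assuming you use the gspread library with a service-account key that has edit access to the sheet; the sheet name and file paths are placeholders.

    # Minimal sketch: append build-file sizes to a Google Sheet as a running log.
    import datetime
    import pathlib
    import gspread

    FILES = ["dist/app.js", "dist/styles.css"]   # placeholder build artifacts
    SHEET_NAME = "build-size-log"                # placeholder sheet name

    gc = gspread.service_account(filename="service-account.json")
    worksheet = gc.open(SHEET_NAME).sheet1

    # One row per run: timestamp followed by the size of each file in bytes
    row = [datetime.datetime.utcnow().isoformat()]
    row += [pathlib.Path(f).stat().st_size for f in FILES]
    worksheet.append_row(row)

Run it as a post-build step in the Jenkins job (or from the cron job mentioned above) and build the chart in the spreadsheet itself.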
In my search leading up to this, I did come across JMeter, and the Performance Plugin for Jenkins, which were contenders for a possible solution.
First of all, my apologies that the question is on Stack Overflow and not Stack Exchange; I don't have enough points to ask it there.
I've created a Packer template which creates my image (the image includes the code for my application, nginx, php-fpm and ...).
If you have used Packer before, you will know that at the end of the process it gives you the image_id. I need to use this image ID to update the template for my CloudFormation stack on AWS.
The CloudFormation template will create a launch configuration based on the image_id from Packer. Later on, the launch configuration will be used to create an auto scaling group, which is connected to an ELB (the ELB is not under CloudFormation).
Here are my questions:
1 - What's the best way to automate the process of getting the ID from Packer and updating the CloudFormation template? (To elaborate, I need to get the ID somehow; for now the only thing I can think of is a bash command, but this causes an issue if I want to use Jenkins later on. What are the alternatives?)
2 - Let's say I managed to get the ID; now what's the best way to update the CloudFormation template? (Currently the AWS CLI is my only option; is there a better solution?)
3 - How can I automate this whole process using Jenkins?
I would write a wrapper Python/Ruby script that runs Packer, then calls CloudFormation, reading the image ID from the Packer output.
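Something along these lines; a rough sketch, assuming the Packer template has a manifest post-processor writing packer-manifest.json and the CloudFormation stack takes the AMI as an ImageId parameter (stack, region and parameter names are placeholders).

    # Rough sketch of a wrapper: build the image with Packer, read the AMI id from
    # the manifest, then update the existing CloudFormation stack in place.
    import json
    import subprocess
    import boto3

    # 1. Build the image; assumes the template includes a "manifest" post-processor
    subprocess.run(["packer", "build", "template.json"], check=True)

    # 2. artifact_id looks like "us-east-1:ami-0123456789abcdef0"
    with open("packer-manifest.json") as f:
        manifest = json.load(f)
    ami_id = manifest["builds"][-1]["artifact_id"].split(":")[1]

    # 3. Update the stack, reusing the existing template and only changing ImageId
    cfn = boto3.client("cloudformation", region_name="us-east-1")
    cfn.update_stack(
        StackName="my-app-asg",                  # placeholder stack name
        UsePreviousTemplate=True,
        Parameters=[{"ParameterKey": "ImageId", "ParameterValue": ami_id}],
        Capabilities=["CAPABILITY_IAM"],         # only if your template needs it
    )
    cfn.get_waiter("stack_update_complete").wait(StackName="my-app-asg")

In Jenkins this whole script becomes a single build step, which also covers question 3; keeping the AMI id as a stack parameter (as here) means only the parameter value changes per build, not the template file itself.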