Adding Basic Monitoring Package to Virtual Guest via API

Is it possible to add a monitoring package through the SoftLayer API? On the portal, I can go into the Monitoring section and order a "Monitoring Package - Basic", which will associate it with that Virtual Guest.
Is it possible to do this either during the placeOrder call or after the initial placeOrder call (i.e., if the customer wants to add Basic Monitoring after the server is provisioned)?
I tried to look into examples, but they all assumed that there was a monitoring agent available, which wasn't the case for me. I also looked into Going Further with Softlayer part 3, but I'm not sure how to extract the Basic Monitoring package from the Product_Package service.
I'm using Python to do this, so any pointers on associating a Monitoring service during creation or after creation would be very helpful.
Thanks in advance!

Try this:
"""
Order a Monitoring Package
Build a SoftLayer_Container_Product_Order_Monitoring_Package object for a new
monitoring order and pass it to the SoftLayer_Product_Order API service to order it
In this case we'll order a Basic (Hardware and OS) package with the Basic Monitoring Package - Linux
configuration; for more details see below.
Important manual pages:
https://sldn.softlayer.com/reference/datatypes/SoftLayer_Container_Product_Order_Monitoring_Package
http://sldn.softlayer.com/reference/datatypes/SoftLayer_Product_Item_Price
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Order/verifyOrder
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Order/placeOrder
http://sldn.softlayer.com/reference/datatypes/SoftLayer_Monitoring_Agent_Configuration_Template_Group
License: http://sldn.softlayer.com/article/License
Author: SoftLayer Technologies, Inc. <sldn@softlayer.com>
"""
import SoftLayer
USERNAME = 'set me'
API_KEY = 'set me'
"""
Build a skeleton SoftLayer_Container_Product_Order_Monitoring_Package object
containing the order you wish to place.
"""
orderTemplate = {
    'complexType': 'SoftLayer_Container_Product_Order_Monitoring_Package',
    'packageId': 0,  # the package ID for ordering monitoring packages is 0
    'prices': [
        {'id': 2302}  # this is the price for Monitoring Package - Basic (Hardware and OS)
    ],
    'quantity': 0,  # the quantity for ordering a service (in this case a monitoring package) must be 0
    'sendQuoteEmailFlag': True,
    'useHourlyPricing': True,
    'virtualGuests': [
        {'id': 4906034}  # the virtual guest ID to which you want to add the monitoring package
    ],
    'configurationTemplateGroups': [
        {'id': 3}  # the template ID for the monitoring group (in this case the Basic Monitoring package for Unix/Linux operating systems)
    ]
}
# Declare the API client to use the SoftLayer_Product_Order API service
client = SoftLayer.Client(username=USERNAME, api_key=API_KEY)
productOrderService = client['SoftLayer_Product_Order']
"""
verifyOrder() will check your order for errors. Replace this with a call to
placeOrder() when you're ready to order. Both calls return a receipt object
that you can use for your records.
Once your order is placed it'll go through SoftLayer's provisioning process.
"""
try:
    order = productOrderService.verifyOrder(orderTemplate)
    print(order)
except SoftLayer.SoftLayerAPIError as e:
    print("Unable to verify the order! faultCode=%s, faultString=%s"
          % (e.faultCode, e.faultString))
    exit(1)
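To answer the Product_Package part of your question: if you don't want to hard-code price 2302 and template group 3, you can pull the monitoring item prices out of the SoftLayer_Product_Package service yourself. A minimal sketch reusing the client above; filtering on the item description is just a heuristic I'm assuming here:

packageService = client['SoftLayer_Product_Package']
# Package 0 is the additional-services package used for the order above
items = packageService.getItems(id=0, mask='mask[id,description,prices[id]]')
for item in items:
    if 'Monitoring Package' in item.get('description', ''):
        for price in item.get('prices', []):
            print('%s -> price id %s' % (item['description'], price['id']))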
And this is an example of creating a network monitor:
"""
Create network monitoring
The script creates a network monitor with a service ping
on a given IP address.
Important manual pages:
http://sldn.softlayer.com/reference/services/SoftLayer_Network_Monitor_Version1_Query_Host
http://sldn.softlayer.com/reference/datatypes/SoftLayer_Network_Monitor_Version1_Query_Host
License: http://sldn.softlayer.com/article/License
Author: SoftLayer Technologies, Inc. <sldn@softlayer.com>
"""
import SoftLayer.API
from pprint import pprint as pp
# Your SoftLayer API username and key.
USERNAME = 'set me'
API_KEY = 'set me'
# The ID of the server you wish to monitor
serverId = 7698842
"""
ID of the query type which can be found with SoftLayer_Network_Monitor_Version1_Query_Host_Stratum/getAllQueryTypes.
This example uses SERVICE PING: Test ping to address, will not fail on slow server response due to high latency or
high server load
"""
queryTypeId = 1
# IP address on the previously defined server to monitor
ipAddress = '10.104.50.118'
# Declare the API client
client = SoftLayer.Client(username=USERNAME, api_key=API_KEY)
networkMonitorVersion = client['SoftLayer_Network_Monitor_Version1_Query_Host']
# Define the SoftLayer_Network_Monitor_Version1_Query_Host templateObject.
newMonitor = {
    'guestId': serverId,
    'queryTypeId': queryTypeId,
    'ipAddress': ipAddress
}
# Send the request for object creation and display the return value
try:
    result = networkMonitorVersion.createObject(newMonitor)
    pp(result)
except SoftLayer.SoftLayerAPIError as e:
    print("Unable to create new network monitoring! faultCode=%s, faultString=%s"
          % (e.faultCode, e.faultString))
    exit(1)
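If you need a different queryTypeId, the docstring above mentions SoftLayer_Network_Monitor_Version1_Query_Host_Stratum's getAllQueryTypes. A minimal sketch reusing the same client; I'm assuming each entry exposes the id, name, and description fields of the query type datatype:

stratumService = client['SoftLayer_Network_Monitor_Version1_Query_Host_Stratum']
for queryType in stratumService.getAllQueryTypes():
    print(queryType['id'], queryType.get('name'), '-', queryType.get('description'))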
Regards

Related

write_points() Python not writing data for InfluxDB

I'm doing the basic setup in python to pass data to an InfluxDB server I have running on a RaspberryPi. My issue is the write_points() function does not write ANY data to InfluxDB even though I am using the simplest possible measurement and field-set entry as a test:
from influxdb import InfluxDBClient
from influxdb_config import HOST, PORT, USERNAME, PASSWORD, DATABASE
from data_poll import quotes_response
import pprint
influxdbClient = InfluxDBClient(
    host=HOST,
    port=PORT,
    username=USERNAME,
    password=PASSWORD,
    database='example'
)
data = [
    {
        "measurement": "stock price",
        "fields": {
            "price": 0.64,
            "volume": 120
        }
    }
]
pprint.pprint(influxdbClient.ping())
pprint.pprint(influxdbClient.get_list_database())
influxdbClient.switch_database('example')
pprint.pprint(influxdbClient.write_points(data))
pprint.pprint(influxdbClient.query('SELECT * FROM example'))
I am able to communicate with the server via Python and, if I create values manually on the server, retrieve them in the same script. Below is a snippet of the terminal output that matches some of the requests in the above code snippet.
'1.8.4'
[{'name': '_internal'}, {'name': 'jsonAAPLDataTest'}, {'name': 'example'}]
True
ResultSet({})
Update 2021/03/14 - I'm currently using Python 3.9.2, but had the exact same issue with 3.7.3 (tested by the API developers). My next attempt is to downgrade my InfluxDB instance from v1.8.4 to v1.7.4 to see if this, by chance, resolves the issue.
I was now able to write data to my InfluxDB v1.8.4 database using the proper API: github.com/influxdata/influxdb-client-python. Prior to this I was using the prior-release API, which apparently differs in the underlying functionality for writing to the database. Figured I would at least follow up and share the information so others would know in case they encounter this issue.
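For anyone landing here, this is roughly what the write looks like with the newer influxdb-client package (pip install influxdb-client) against InfluxDB 1.8.x. The URL, credentials, and the autogen retention policy below are placeholders; in 1.8 compatibility mode the token is "username:password", the org is "-", and the bucket is "database/retention_policy":

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="USERNAME:PASSWORD", org="-")
write_api = client.write_api(write_options=SYNCHRONOUS)

# same measurement and fields as the failing example above
point = Point("stock price").field("price", 0.64).field("volume", 120)
write_api.write(bucket="example/autogen", record=point)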

Failing to create services on Google Cloud Run with API using Java SDK

I created a Cloud Run client; however, I couldn't find a way to list services deployed with Cloud Run on GKE (for Anthos).
Create the client:
try {
    HttpTransport httpTransport = new NetHttpTransport();
    JsonFactory jsonFactory = new JacksonFactory();
    GoogleCredentials credential = GoogleCredentials.getApplicationDefault();
    credential.createScoped("https://www.googleapis.com/auth/cloud-platform");
    HttpRequestInitializer requestInitializer = new HttpCredentialsAdapter(credential);
    CloudRun.Builder builder = new CloudRun.Builder(httpTransport, jsonFactory, requestInitializer);
    return builder.setApplicationName(applicationName)
            .setRootUrl(cloudRunRootUrl)
            .build();
} catch (IOException e) {
    e.printStackTrace();
}
Try to list services:
services = cloudRun.namespaces().services()
        .list("namespaces/default")
        .execute()
        .getItems();
My "hello" service is deploy on a GKE cluster under the namespace default. The above code doesn't work because the client always see "default" as project_id and complains about permission stuff. If I put the project_id rather than "default", permission errors are gone, but no services will be found.
I tried another project that does have Google fully-managed cloud run services, the same code returns result (with .list("namespaces/")).
How to access the service on GKE?
And my next question would be, how to programmatically create Cloud Run services on GKE?
Edit - for creating a service
As I couldn't figure out how to interact with Cloud Run on GKE, I took a step back and tried the fully managed one. The following code to create a service fails, and the error message doesn't provide much useful insight. How can I make it work?
Service deployedService = null;
Service helloService = new Service();
// Map<String,String> annotations = new HashMap<>();
// annotations.put("client.knative.dev/user-image","gcr.io/cloudrun/hello");
ServiceSpec spec = new ServiceSpec();
List<Container> containers = new ArrayList<>();
containers.add(new Container().setImage("gcr.io/cloudrun/hello"));
spec.setTemplate(new RevisionTemplate().setMetadata(new ObjectMeta().setName("hello-fully-managed-v0.1.0"))
        .setSpec(new RevisionSpec().setContainerConcurrency(20)
                .setContainers(containers)
                .setTimeoutSeconds(100)
        )
);
helloService.setApiVersion("serving.knative.dev/v1")
        .setMetadata(new ObjectMeta().setName("hello-fully-managed")
                .setNamespace("data-infrastructure-test-env")
                // .setAnnotations(annotations)
        )
        .setSpec(spec)
        .setKind("Service");
try {
    deployedService = cloudRun.namespaces().services()
            .create("namespaces/data-infrastructure-test-env", helloService)
            .execute();
} catch (IOException e) {
    e.printStackTrace();
    response.add(e.toString());
    return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(response);
}
Error message I got:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 400 Bad Request
{
"code" : 400,
"errors" : [ {
"domain" : "global",
"message" : "The request has errors",
"reason" : "badRequest"
} ],
"message" : "The request has errors",
"status" : "INVALID_ARGUMENT"
}
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:150)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
And the base_url is: https://europe-west1-run.googleapis.com
Your question is quite detailed (and is about Java, which I am no expert in), and there are actually too many questions in there (ideally, please ask only one question here). However, I'll try to answer a few things you asked:
First, Cloud Run (managed, and on GKE) both implement the Knative Serving API. I've explained this at https://ahmet.im/blog/cloud-run-is-a-knative/ In fact, Cloud Run on GKE is just the open source Knative components installed to your cluster.
And my next question would be, how to programmatically create Cloud Run services on GKE?
You will have a very hard time (if possible at all) using the Cloud Run API client libraries (e.g. new CloudRun above) because these are designed for *.googleapis.com endpoints.
The Knative API part of "Cloud Run on GKE" is actually just your Kubernetes (GKE) master API endpoint (which runs on an IP address, with a TLS certificate that isn't trusted by root CAs, but you can find the CA cert in the GKE GetCluster API call to verify the cert). The TLS part is why it's so hard to use the API client libraries.
Knative APIs are just Kubernetes objects. So your best bet is one of these:
Check whether the Kubernetes Java client (https://github.com/kubernetes-client/java) allows dynamic objects (the Go implementation does) and try to use that to create Knative CRDs.
Use kubectl apply.
Ask the Knative Serving open source repository for help (they should be providing client libraries; maybe they're already there, I'm not sure).
To program Cloud Run (managed) with the API Client Libraries, you need to explicitly override the API endpoint to the region e.g. us-central1-run.googleapis.com. (This is documented on each API call's REST API reference documentation.)
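For example, here's a minimal sketch of that override in Python (your question uses Java, but the generated client libraries work the same way; google-api-python-client, the region, and the project ID here are my assumptions):

from googleapiclient import discovery

# Override the default endpoint with a regional one
run = discovery.build(
    'run', 'v1',
    client_options={'api_endpoint': 'https://us-central1-run.googleapis.com'}
)
services = run.namespaces().services().list(
    parent='namespaces/my-project-id'
).execute()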
I have written a blog post in detail (with sample code in Go) on how to create/update services on Cloud Run (managed) using the Knative Serving API here: https://ahmet.im/blog/gcloud-run-deploy/
If you want to see how gcloud run deploy works, and which APIs it calls, you can pass --log-http option to observe the request/response traffic.
As for the error you got, it seems like the error message isn't helpful, but it might be coming from anywhere (as you're trying to imitate Knative API in GCP client libraries). I recommend reading my blog posts and sample code in depth.
UPDATE: Our engineering team is looking at the issue; it appears there's currently a bug where the "details" field is not added to the error. That's being worked on.
In your case, we see the following errors from requests:
field: "spec.template.spec"
description: "Missing template spec."
This means you are not properly filling in the spec field, as shown in my blog post and sample code.
field: "metadata.name"
description: "The revision name must be prefixed by the name of the enclosing Service or Configuration with a trailing -"
Make sure the name you are specifying adheres to the patterns specified in the API docs. Try to create that name manually, perhaps in the UI or the gcloud CLI.
field: "api_version"
description: "Unsupported API version \'serving.knative.dev/v1\'. Expected \'serving.knative.dev/v1alpha1\'"
Do not use v1alpha1 API, use v1 directly.
We'll try to get the details added to the error message; however, it appears that you need to study the sample code linked in my blog post in more detail:
https://github.com/GoogleCloudPlatform/cloud-run-button/blob/a52c7fbaae33a3e06c112206c7227a0ef9649647/cmd/cloudshell_open/deploy.go#L26-L112
The Java SDK is automatically generated from the fact that the Cloud Run (fully managed) API is public. It does not support Cloud Run for Anthos.
(gcloud.run.deploy) The revision name must be prefixed by the name of the enclosing Service or Configuration with a trailing -
The revision name is the combination of the service name and a revision suffix, and it is created automatically by GCP. Keep it within 65 characters (in an automation pipeline with GCP, make the revision suffix shorter) and the problem will be resolved.

Cannot read BigQuery table sourced from Google Sheet (OAuth / scope error)

import pandas as pd
from google.cloud import bigquery
import google.auth
# from google.cloud import bigquery

# Create credentials with Drive & BigQuery API scopes
# Both APIs must be enabled for your project before running this code
credentials, project = google.auth.default(scopes=[
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/spreadsheets',
    'https://www.googleapis.com/auth/bigquery',
])
client = bigquery.Client(credentials=credentials, project=project)

# Configure the external data source and query job
external_config = bigquery.ExternalConfig('GOOGLE_SHEETS')

# Use a shareable link or grant viewing access to the email address you
# used to authenticate with BigQuery (this example Sheet is public)
sheet_url = (
    'https://docs.google.com/spreadsheets'
    '/d/1uknEkew2C3nh1JQgrNKjj3Lc45hvYI2EjVCcFRligl4/edit?usp=sharing')
external_config.source_uris = [sheet_url]
external_config.schema = [
    bigquery.SchemaField('name', 'STRING'),
    bigquery.SchemaField('post_abbr', 'STRING')
]
external_config.options.skip_leading_rows = 1  # optionally skip header row

table_id = 'BambooHRActiveRoster'
job_config = bigquery.QueryJobConfig()
job_config.table_definitions = {table_id: external_config}

# Get Top 10
sql = 'SELECT * FROM workforce.BambooHRActiveRoster LIMIT 10'
query_job = client.query(sql, job_config=job_config)  # API request

top10 = list(query_job)  # Waits for query to finish
print('There are {} states with names starting with W.'.format(
    len(top10)))
The error I get is:
BadRequest: 400 Error while reading table: workforce.BambooHRActiveRoster, error message: Failed to read the spreadsheet. Errors: No OAuth token with Google Drive scope was found.
I can pull data in from a BigQuery table created from CSV upload, but when I have a BigQuery table created from a linked Google Sheet, I continue to receive this error.
I have tried to replicate the sample in Google's documentation (Creating and querying a temporary table):
https://cloud.google.com/bigquery/external-data-drive
You are authenticating as yourself, which is generally fine for BQ if you have the correct permissions. Using tables linked to Google Sheets often requires a service account. Create one (or have your BI/IT team create one), and then you will have to share the underlying Google Sheet with the service account. Finally, you will need to modify your python script to use the service account credentials and not your own.
The quick way around this is to use the BQ interface, select * from the Sheets-linked table, and save the results to a new table, and query that new table directly in your python script. This works well if this is a one-time upload/analysis. If the data in the sheets will be changing consistently and you will need to routinely query the data, this is not a long-term solution.
I solved the problem by adding the scope object to the client.
from google.cloud import bigquery
import google.auth
credentials, project = google.auth.default(scopes=[
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/bigquery',
])
CLIENT = bigquery.Client(project='project', credentials=credentials)
https://cloud.google.com/bigquery/external-data-drive
import pandas as pd
from google.oauth2 import service_account
from google.cloud import bigquery
# from oauth2client.service_account import ServiceAccountCredentials

SCOPES = ['https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/bigquery']
SERVICE_ACCOUNT_FILE = 'mykey.json'
PROJECT = 'my-project-id'  # set your GCP project ID

credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)
delegated_credentials = credentials.with_subject('myserviceaccountt@domain.iam.gserviceaccount.com')
client = bigquery.Client(credentials=delegated_credentials, project=PROJECT)

sql = 'SELECT * FROM `myModel`'
DF = client.query(sql).to_dataframe()
You can try to update your default credentials through the console:
gcloud auth application-default login --scopes=https://www.googleapis.com/auth/userinfo.email,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/cloud-platform

Is it possible to create product keys for my electron application?

I want to build a desktop application and be able to publish product keys or serial numbers. Before the user can use the application, he will be requested to enter the product key/serial number.
Similar to Microsoft Office when they provide keys like XXXX-XXXX-XXXX-XXXX.
The idea I have is to sell the app based on licenses, and providing a product key for every device seems more professional than accounts (usernames and passwords).
So my questions are:
1) Is it possible to accomplish this with Electron?
2) Can you advise me whether I should go for serial numbers (if it is doable) or accounts? Or are there better options?
3) If you answered the second question, please state why.
Edit for 2021: I'd like to revise this answer, as it has generated a lot of inquiries on the comparison I made between license keys and user accounts. I previously would almost always recommend utilizing user accounts for licensing Electron apps, but I've since changed my position to be a little more nuanced. For most Electron apps, license keys will do just fine.
Adding license key (synonymous with product key) validation to an Electron app can be pretty straightforward. First, you would want to somehow generate a license key for each user. This can be done using cryptography, or it can be done by generating a 'random' license key string, storing it in a database, and then building a CRUD licensing server that can verify that a given license key is "valid."
For cryptographic license keys, you can take some information from the customer, e.g. their order number or an email address, and create a 'signature' of it using RSA cryptography. Using Node, that would look something like this:
const crypto = require('crypto')
// Generate a new keypair
const { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', {
  // Using a larger key size, such as 2048, would be more secure
  // but will result in longer signatures.
  modulusLength: 512,
  privateKeyEncoding: { type: 'pkcs1', format: 'pem' },
  publicKeyEncoding: { type: 'pkcs1', format: 'pem' },
})
// Some data we're going to use to generate a license key from
const data = 'user@store.example'
// Create a RSA signer
const signer = crypto.createSign('rsa-sha256')
signer.update(data)
// Encode the original data
const encoded = Buffer.from(data).toString('base64')
// Generate a signature for the data
const signature = signer.sign(privateKey, 'hex')
// Combine the encoded data and signature to create a license key
const licenseKey = `${encoded}.${signature}`
console.log({ privateKey, publicKey, licenseKey })
Then, to validate the license key within your Electron app, you would want to cryptographically 'verify' the key's authenticity by embedding the public (not the private!) key generated above into your application code base:
// Split the license key's data and the signature
const [encoded, signature] = licenseKey.split('.')
const data = Buffer.from(encoded, 'base64').toString()
// Create an RSA verifier
const verifier = crypto.createVerify('rsa-sha256')
verifier.update(data)
// Verify the signature for the data using the public key
const valid = verifier.verify(publicKey, signature, 'hex')
console.log({ valid, data })
Generating and verifying the authenticity of cryptographically signed license keys like this will work great for a lot of simple licensing needs. They're relatively simple, and they work great offline, but sometimes verifying that a license key is 'valid' isn't enough. Sometimes requirements dictate that license keys are not perpetual (i.e. 'valid' forever), or they call for more complicated licensing systems, such as one where only a limited number of devices (or seats) can use the app at one time. Or perhaps the license key needs a renewable expiration date. That's where a license server can come in.
A license server can help manage a license's activation, expirations, among other things, such as user accounts used to associate multiple licenses or feature-licenses with a single user or team. I don't recommend user accounts unless you have a specific need for them, e.g. you need additional user profile information, or you need to associate multiple licenses with a single user.
But in case you aren't particularly keen on writing and maintaining your own in-house licensing system, or you just don't want to deal with writing your own license key generator like the one above, I’m the founder of a software licensing API called Keygen which can help you get up and running quickly without having to write and host your own license server. :)
Keygen is a typical HTTP JSON API service (i.e. there’s no software that you need to package with your app). It can be used in any programming language and with frameworks like Electron.
In its simplest form, validating a license key with Keygen is as easy as hitting a single JSON API endpoint (feel free to run this in a terminal):
curl -X POST https://api.keygen.sh/v1/accounts/demo/licenses/actions/validate-key \
  -d '{
    "meta": {
      "key": "C1B6DE-39A6E3-DE1529-8559A0-4AF593-V3"
    }
  }'
I recently put together an example of adding license key validation, as well as device activation and management, to an Electron app. You can check out that repo on GitHub: https://github.com/keygen-sh/example-electron-license-activation.
I hope that answers your question and gives you a few insights. Happy to answer any other questions you have, as I've implemented licensing a few times now for Electron apps. :)
YES, but concerning the software registration mechanism, IT IS HARD, and it needs a lot of testing too.
If 90% of your users have internet access, you should definitely go with user accounts or some OAuth 2.0 plug-and-play thing (login with Facebook/Gmail/whatever).
I built a software licensing architecture from scratch using crypto and the fs module, and it was quite a journey (a year)!
Making a good registration mechanism for your software from scratch is not recommended, and Electron makes it harder because the source code is relatively exposed.
That being said, if you really wanna go that way, bcrypt is good at this (hashes). You need a unique user identifier to hash, you also need some kind of persistence (preferably a file) where you can store the user license, and you need to hide the salt that you are using for hashing, either by hashing the hash... or storing small bits of it in separate files.
This will make a good starting point for licensing, but it's far from being fully secured.
Hope it helps!
There are many services out there that help you add license-key-based licensing to your app. And to ensure your customers don't reuse the key, you need a strong device fingerprinting algorithm.
You can try out Cryptlex. It offers a very robust licensing solution with an advanced device fingerprinting algorithm. You can check out the Node.js example on GitHub to add licensing to your Electron app.
Yes, it is possible.
I myself desired this feature, and I found related solutions such as paid video tutorials, online solutions [with Keygen], and other random hacks, but I wanted something that is offline and free, so I created my own repository for myself/others to use. Here's how it works.
Overview
Install secure-electron-license-keys-cli. (ie. npm i -g secure-electron-license-keys-cli).
Create a license key by running secure-electron-license-keys-cli. This generates a public.key, private.key and license.data.
Keep private.key safe, but stick public.key and license.data in the root of your Electron app.
Install secure-electron-license-keys. (ie. npm i secure-electron-license-keys).
In your main.js file, review this sample code and add the necessary binding.
const {
  app,
  BrowserWindow,
  ipcMain,
} = require("electron");
const SecureElectronLicenseKeys = require("secure-electron-license-keys");
const path = require("path");
const fs = require("fs");
const crypto = require("crypto");

// Keep a global reference of the window object, if you don't, the window will
// be closed automatically when the JavaScript object is garbage collected.
let win;

async function createWindow() {
  // Create the browser window.
  win = new BrowserWindow({
    width: 800,
    height: 600,
    title: "App title",
    webPreferences: {
      preload: path.join(
        __dirname,
        "preload.js"
      )
    },
  });

  // Setup bindings for offline license verification
  SecureElectronLicenseKeys.mainBindings(ipcMain, win, fs, crypto, {
    root: process.cwd(),
    version: app.getVersion(),
  });

  // Load app
  win.loadURL("index.html");

  // Emitted when the window is closed.
  win.on("closed", () => {
    // Dereference the window object, usually you would store windows
    // in an array if your app supports multi windows, this is the time
    // when you should delete the corresponding element.
    win = null;
  });
}

// This method will be called when Electron has finished
// initialization and is ready to create browser windows.
// Some APIs can only be used after this event occurs.
app.on("ready", createWindow);

// Quit when all windows are closed.
app.on("window-all-closed", () => {
  // On macOS it is common for applications and their menu bar
  // to stay active until the user quits explicitly with Cmd + Q
  if (process.platform !== "darwin") {
    app.quit();
  } else {
    SecureElectronLicenseKeys.clearMainBindings(ipcMain);
  }
});
In your preload.js file, review the sample code and add the supporting code.
const {
  contextBridge,
  ipcRenderer
} = require("electron");
const SecureElectronLicenseKeys = require("secure-electron-license-keys");

// Expose protected methods that allow the renderer process to use
// the ipcRenderer without exposing the entire object
contextBridge.exposeInMainWorld("api", {
  licenseKeys: SecureElectronLicenseKeys.preloadBindings(ipcRenderer)
});
Review the sample React component to see how you can verify the validity of your license and act accordingly within your app.
import React from "react";
import {
  validateLicenseRequest,
  validateLicenseResponse,
} from "secure-electron-license-keys";

class Component extends React.Component {
  constructor(props) {
    super(props);

    this.checkLicense = this.checkLicense.bind(this);
  }

  componentWillUnmount() {
    window.api.licenseKeys.clearRendererBindings();
  }

  componentDidMount() {
    // Set up binding to listen when the license key is
    // validated by the main process
    const _ = this;

    window.api.licenseKeys.onReceive(validateLicenseResponse, function (data) {
      console.log("License response:");
      console.log(data);
    });
  }

  // Fire event to check the validity of our license
  checkLicense(event) {
    window.api.licenseKeys.send(validateLicenseRequest);
  }

  render() {
    return (
      <div>
        <button onClick={this.checkLicense}>Check license</button>
      </div>
    );
  }
}

export default Component;
You are done!
Further detail
To explain further, the license is validated by a request from the client (ie. front-end) page. The client sends an IPC request to the main (ie. backend) process via this call (window.api.licenseKeys.send(validateLicenseRequest)).
Once this call is received by the backend process (which was hooked up when we set it up with SecureElectronLicenseKeys.mainBindings), the library code tries to decrypt license.data with public.key. Regardless of whether this succeeds, the success status is sent back to the client page (via IPC).
How to limit license keys by version
What I've explained is quite limited because it doesn't limit the versions of an app you might give to a particular user. secure-electron-license-keys-cli includes flags you may pass when generating the license key to set particular major/minor/patch/expire values for a license.
If you wanted to allow major versions up to 7, you could run the command to generate a license file like so:
secure-electron-license-keys-cli --major "7"
If you wanted to allow major versions up to 7 and expire on 2022-12-31, you could run the command to generate a license file like so:
secure-electron-license-keys-cli --major "7" --expire "2022-12-31"
If you do run these commands, you will need to update your client page in order to compare against them, ie:
window.api.licenseKeys.onReceive(validateLicenseResponse, function (data) {
  // If the license key/data is valid
  if (data.success) {
    if (data.appVersion.major <= data.major &&
        new Date() <= Date.parse(data.expire)) {
      // User is able to use app
    } else {
      // License has expired
    }
  } else {
    // License isn't valid
  }
});
The repository page has more details of options but this should give you the gist of what you'll have to do.
Limitations
This isn't perfect, but will likely handle 90% of your users. This doesn't protect against:
Someone decompiling your app and making their own license to use/removing license code entirely
Someone copying a license and giving it to another person
There's also the concern of how to run this library if you are packaging multiple or automated .exes, since these license files need to be included in the source. I'll leave that up to your creativity to figure out.
Extra resources / disclaimers
I built all of the secure-electron-* repositories mentioned in this question, and I also maintain secure-electron-template which has the setup for license keys already pre-baked into the solution if you need something turn-key.

Slack clean all messages (~8K) in a channel [closed]

We currently have a Slack channel with ~8K messages, all coming from the Jenkins integration. Is there any programmatic way to delete all messages from that channel? The web interface can only delete 100 messages at a time.
I quickly found out someone had already made a helper for this: slack-cleaner.
And for me it's just:
slack-cleaner --token=<TOKEN> --message --channel jenkins --user "*" --perform
I wrote a simple node script for deleting messages from public/private channels and chats. You can modify and use it.
https://gist.github.com/firatkucuk/ee898bc919021da621689f5e47e7abac
First, modify your token in the script's configuration section, then run the script:
node ./delete-slack-messages CHANNEL_ID
Get an OAuth token:
Go to https://api.slack.com/apps
Click 'Create New App', and name your (temporary) app.
In the side nav, go to 'Oauth & Permissions'
On that page, find the 'Scopes' section. Click 'Add an OAuth Scope' and add 'channels:history' and 'chat:write'. (see scopes)
At the top of the page, Click 'Install App to Workspace'. Confirm, and on page reload, copy the OAuth Access Token.
Find the channel ID
Also, the channel ID can be seen in the URL when you open Slack in the browser, e.g.
https://mycompany.slack.com/messages/MY_CHANNEL_ID/
or
https://app.slack.com/client/WORKSPACE_ID/MY_CHANNEL_ID
The default clean command did not work for me, giving the following error:
$ slack-cleaner --token=<TOKEN> --message --channel <CHANNEL>
Running slack-cleaner v0.2.4
Channel, direct message or private group not found
but the following worked without any issue to clean the bot messages:
slack-cleaner --token <TOKEN> --message --group <CHANNEL> --bot --perform --rate 1
or
slack-cleaner --token <TOKEN> --message --group <CHANNEL> --user "*" --perform --rate 1
to clean all the messages.
I used a rate limit of 1 second to avoid HTTP 429 Too Many Requests errors caused by the Slack API rate limit. In both cases, the channel name was supplied without the # sign.
For anyone else who doesn't need to do it programmatically,
here's a quick way:
(probably for paid users only)
Open the channel in web or the desktop app, and click the cog (top right).
Choose "Additional options..." to bring up the archival menu. notes
Select "Set the channel message retention policy".
Set "Retain all messages for a specific number of days".
All messages older than this time are deleted permanently!
I usually set this option to "1 day" to leave the channel with some context, then I go back into the above settings and set its retention policy back to "default" to continue storing them from now on.
Notes:
Luke points out: if the option is hidden, you have to go to the global workspace Admin settings, Message Retention & Deletion, and check "Let workspace members override these settings".
!!UPDATE!!
As @niels-van-reijmersdal mentioned in a comment, this feature has been removed. See this thread for more info: twitter.com/slackhq/status/467182697979588608?lang=en
!!END UPDATE!!
Here is a nice answer from SlackHQ on Twitter, and it works without any third-party stuff.
https://twitter.com/slackhq/status/467182697979588608?lang=en
You can bulk delete via the archives (http://my.slack.com/archives)
page for a particular channel: look for "delete messages" in the menu
Option 1 You can set a Slack channel to automatically delete messages after 1 day, but it's a little hidden. First, you have to go to your Slack Workspace Settings, Message Retention & Deletion, and check "Let workspace members override these settings". After that, in the Slack client you can open a channel, click the gear, and click "Edit message retention..."
Option 2 The slack-cleaner command line tool that others have mentioned.
Option 3 Below is a little Python script that I use to clear Private channels. Can be a good starting point if you want more programmatic control of deletion. Unfortunately Slack has no bulk-delete API, and they rate-limit the individual delete to 50 per minute, so it unavoidably takes a long time.
# -*- coding: utf-8 -*-
"""
Requirement: pip install slackclient
"""
import multiprocessing.dummy, ctypes, time, traceback, datetime
from slackclient import SlackClient

legacy_token = raw_input("Enter token of an admin user. Get it from https://api.slack.com/custom-integrations/legacy-tokens >> ")
slack_client = SlackClient(legacy_token)

name_to_id = dict()
res = slack_client.api_call(
    "groups.list",  # groups are private channels, conversations are public channels. Different API.
    exclude_members=True,
)
print("Private channels:")
for c in res['groups']:
    print(c['name'])
    name_to_id[c['name']] = c['id']

channel = raw_input("Enter channel name to clear >> ").strip("#")
channel_id = name_to_id[channel]

pool = multiprocessing.dummy.Pool(4)  # slack rate-limits the API, so not much benefit to more threads.
count = multiprocessing.dummy.Value(ctypes.c_int, 0)

def _delete_message(message):
    try:
        success = False
        while not success:
            res = slack_client.api_call(
                "chat.delete",
                channel=channel_id,
                ts=message['ts']
            )
            success = res['ok']
            if not success:
                if res.get('error') == 'ratelimited':
                    # print(res)
                    time.sleep(float(res['headers']['Retry-After']))
                else:
                    raise Exception("got error: %s" % (str(res.get('error'))))
        count.value += 1
        if count.value % 50 == 0:
            print(count.value)
    except:
        traceback.print_exc()

retries = 3
hours_in_past = int(raw_input("How many hours in the past should messages be kept? Enter 0 to delete them all. >> "))
latest_timestamp = ((datetime.datetime.utcnow() - datetime.timedelta(hours=hours_in_past)) - datetime.datetime(1970, 1, 1)).total_seconds()
print("deleting messages...")
while retries > 0:
    # see https://api.slack.com/methods/conversations.history
    res = slack_client.api_call(
        "groups.history",
        channel=channel_id,
        count=1000,
        latest=latest_timestamp,)  # important to do paging. Otherwise Slack returns a lot of already-deleted messages.
    if res['messages']:
        latest_timestamp = min(float(m['ts']) for m in res['messages'])
        print(datetime.datetime.utcfromtimestamp(float(latest_timestamp)).strftime("%r %d-%b-%Y"))
        pool.map(_delete_message, res['messages'])
    if not res["has_more"]:  # Slack API seems to lie about this sometimes
        print("No data. Sleeping...")
        time.sleep(1.0)
        retries -= 1
    else:
        retries = 10
print("Done.")
Note, that script will need modification to list & clear public channels. The API methods for those are channels.* instead of groups.*
As other answers allude, Slack's rate limits make this tricky - the rate limit is relatively low for their chat.delete API at ~50 requests per minute.
The best strategy that respects the rate limit is to retrieve messages from the channel you want to clear, then delete the messages in batches under 50 that run on a minutely interval.
I've built a project containing an example of this batching that you can easily fork and deploy on Autocode - it lets you clear a channel via slash command (and allows you to restrict access to the command to just certain users, of course!). When you run /cmd clear in a channel, it marks that channel for clearing and runs the following code every minute until it deletes all the messages in the channel:
console.log(`About to clear ${messages.length} messages from #${channel.name}...`);
let deletionResults = await async.mapLimit(messages, 2, async (message) => {
  try {
    await lib.slack.messages['#0.6.1'].destroy({
      id: clearedChannelId,
      ts: message.ts,
      as_user: true
    });
    return {
      successful: true
    };
  } catch (e) {
    return {
      successful: false,
      retryable: e.message && e.message.indexOf('ratelimited') !== -1
    };
  }
});
You can view the full code and a guide to deploying your own version here: https://autocode.com/src/jacoblee/slack-clear-messages/
Tip: if you're going to use slack-cleaner (https://github.com/kfei/slack-cleaner),
you will need to generate a token: https://api.slack.com/custom-integrations/legacy-tokens
If you like Python and have obtained a legacy API token from the Slack API, you can delete all private messages you sent to a user with the following:
import requests
import sys
import time
from json import loads
# config - replace the bit between quotes with your "token"
token = 'xoxp-854385385283-5438342854238520-513620305190-505dbc3e1c83b6729e198b52f128ad69'
# replace 'Carl' with name of the person you were messaging
dm_name = 'Carl'
# helper methods
api = 'https://slack.com/api/'
suffix = 'token={0}&pretty=1'.format(token)
def fetch(route, args=''):
    '''Make a GET request for data at `url` and return formatted JSON'''
    url = api + route + '?' + suffix + '&' + args
    return loads(requests.get(url).text)

# find the user whose dm messages should be removed
target_user = [i for i in fetch('users.list')['members'] if dm_name in i['real_name']]
if not target_user:
    print(' ! your target user could not be found')
    sys.exit()

# find the channel with messages to the target user
channel = [i for i in fetch('im.list')['ims'] if i['user'] == target_user[0]['id']]
if not channel:
    print(' ! your target channel could not be found')
    sys.exit()

# fetch and delete all messages
print(' * querying for channel', channel[0]['id'], 'with target user', target_user[0]['id'])
args = 'channel=' + channel[0]['id'] + '&limit=100'
result = fetch('conversations.history', args=args)
messages = result['messages']
print(' * has more:', result['has_more'], result.get('response_metadata', {}).get('next_cursor', ''))
while result['has_more']:
    cursor = result['response_metadata']['next_cursor']
    result = fetch('conversations.history', args=args + '&cursor=' + cursor)
    messages += result['messages']
    print(' * next page has more:', result['has_more'])
for idx, i in enumerate(messages):
    # tier 3 method rate limit: https://api.slack.com/methods/chat.delete
    # all rate limits: https://api.slack.com/docs/rate-limits#tiers
    time.sleep(1.05)
    result = fetch('chat.delete', args='channel={0}&ts={1}'.format(channel[0]['id'], i['ts']))
    print(' * deleted', idx + 1, 'of', len(messages), 'messages', i['text'])
    if result.get('error', '') == 'ratelimited':
        print('\n ! sorry there have been too many requests. Please wait a little bit and try again.')
        sys.exit()
Here is a great Chrome extension to bulk delete your Slack channel/group/IM messages - https://slackext.com/deleter - where you can filter the messages by star, time range, or user.
BTW, recent versions also support loading all messages, so you can load your ~8K messages as needed.
There is a Slack tool to delete all Slack messages in your workspace. Check it out: https://www.messagebender.com
