I am writing a Slack integration that can boot certain users out of public channels when certain conditions are met. I have added several OAuth scopes to the bot token, including the following:
channels:history
channels:manage
channels:read
chat:write
chat:write.public
groups:write
im:write
mpim:write
users:read
I am writing my bot in Python using the slack-bolt library and asyncio. However, when I try to invoke this code:
await app.client.conversations_kick(channel=channel_id, user=user_id)
I get the following error:
slack_sdk.errors.SlackApiError: The request to the Slack API failed. (url: https://www.slack.com/api/conversations.kick)
The server responded with: {'ok': False, 'error': 'channel_not_found'}
I know for a fact that both the channel_id and user_id arguments I'm passing in are valid. The channel ID I'm using is the string C01PAE3DB0A. I know it is valid because I can use the very same value for channel_id in the following API call:
response = await app.client.conversations_info(channel=channel_id)
And when I call conversations_info like that, I get all of the information about my channel. (The same is true for calling users_info with the user_id - it returns successfully.) So why is it that when I pass my valid channel_id parameter to conversations_kick, I consistently receive this channel_not_found error? What am I missing?
So I got in touch directly with Slack support about this and they confirmed that there is a bug on their end. Specifically, the bug is that I should have received a restricted_action error response instead of a channel_not_found response. Apparently this is a known issue that is on their backlog.
The reason the API call would (try to) return this restricted_action error is simply because there is a workspace setting that, by default, prevents non-admins from kicking people out of public channels. Furthermore, this setting can only be changed by the workspace owner - one tier above admins.
But assuming you are the owner of the Slack workspace, you simply have to log into the Settings & Permissions page and change the setting labeled "People who can remove members from public channels" from "Workspace admins and owners only (default)" to "Everyone, except guests."
Once I made that change, my API calls started succeeding.
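Until the error response is fixed on Slack's side, it can help to treat channel_not_found as potentially meaning restricted_action. A minimal sketch, assuming an async slack-bolt app (the token and helper name are illustrative):

from slack_bolt.async_app import AsyncApp  # async client requires aiohttp
from slack_sdk.errors import SlackApiError

app = AsyncApp(token="xoxb-...")  # hypothetical bot token

async def kick_user(channel_id: str, user_id: str) -> bool:
    """Attempt the kick; surface permission problems instead of crashing."""
    try:
        await app.client.conversations_kick(channel=channel_id, user=user_id)
        return True
    except SlackApiError as e:
        # Until Slack fixes the bug described above, a permissions problem may
        # surface as 'channel_not_found' instead of 'restricted_action'.
        if e.response["error"] in ("restricted_action", "channel_not_found"):
            return False
        raise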
We're trying to include the IVR steps in our UI, but to get the steps I have to make several API calls. That's fine, except the only way I can seem to get the relevant info is to load all flow executions.
If I could pass the flow.sid via the HTTP Request widget then I could go fetch the info I need later instead of having to iterate through all the previous executions. I tried passing {{flow.data}} as the request body, thinking it was JSON, but it ends up being empty.
Here's a spike that someone wrote for us, modified to just work with a single execution.
require "httparty"
STUDIO_FLOW_SID = "FW***"
AUTH = {username: ENV["TWILIO_ACCOUNT_SID"], password: ENV["TWILIO_AUTH_TOKEN"]}
DATE_CREATED_FROM = "2019-09-01T000000Z"
DATE_CREATED_TO = "2019-10-01T000000Z"
# Retrieves all executions in the given date range
executions_url = "https://studio.twilio.com/v1/Flows/#{STUDIO_FLOW_SID}/Executions?DateCreatedFrom=#{DATE_CREATED_FROM}&DateCreatedTo=#{DATE_CREATED_TO}"
response = HTTParty.get(executions_url, basic_auth: AUTH)
# If I can get the individual execution from the IVR {{flow.data}}
# that would be ideal
execution = response.parsed_response["executions"].first
execution_context_url = execution["links"]["execution_context"]
execution_context = HTTParty.get(execution_context_url, basic_auth: AUTH)
# Or, if I could work backwards and get the execution context ID from
# the call somehow, that would work too.
call_sid = execution_context.parsed_response["context"]["trigger"]["call"]["CallSid"]
steps = HTTParty
  .get(execution["links"]["steps"], basic_auth: AUTH)
  .parsed_response["steps"]
  .sort_by { |step| step["date_created"] }
  .map { |step| step["transitioned_to"] }
  .select { |step| step.include?("option") || step.include?("menu") }
puts [call_sid, steps].inspect
tl;dr -
I either need the Flow execution info passed in an HTTP Request widget or I need to work backwards from a CallSid to get the execution steps.
Twilio developer evangelist here.
The execution SID can be accessed in the flow's data, as {{flow.sid}}.
This was missing in the documentation, but I've just added it here: https://www.twilio.com/docs/studio/user-guide#context-variables
Note: {{flow.sid}} doesn't currently appear in Studio's autocomplete, but it's there, I promise!
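Once the HTTP Request widget posts {{flow.sid}} to your service, you can fetch that one execution's steps directly instead of listing everything. A minimal sketch in Python with requests (the SIDs are placeholders; this assumes the v1 Studio REST API used in your spike):

import os
import requests

AUTH = (os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
FLOW_SID = "FW***"        # your Studio flow
EXECUTION_SID = "FN***"   # the {{flow.sid}} value posted by the HTTP Request widget

# List the steps for this single execution and pull out the IVR transitions.
steps_url = f"https://studio.twilio.com/v1/Flows/{FLOW_SID}/Executions/{EXECUTION_SID}/Steps"
steps = requests.get(steps_url, auth=AUTH).json()["steps"]
transitions = [s["transitioned_to"] for s in sorted(steps, key=lambda s: s["date_created"])]
print([t for t in transitions if "option" in t or "menu" in t])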
I found a way to get the execution from the call:
Get the list of incoming phone numbers: https://www.twilio.com/docs/phone-numbers/api/incomingphonenumber-resource#read-multiple-incomingphonenumber-resources
Search for the number using the destination of the call
Then you can get the flow SID by parsing the number's voice_url
With this flow SID and the date the call arrived, you can get the list of executions: https://www.twilio.com/docs/studio/rest-api/v2/execution#read-a-list-of-executions
If you have more than one, you can use the calling number to search the list. A rough sketch of this approach follows.
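Here's what that back-tracking could look like in Python with requests (this assumes the number's voice_url points at webhooks.twilio.com and ends in the flow SID; the function and variable names are illustrative):

import os
import requests

ACCOUNT_SID = os.environ["TWILIO_ACCOUNT_SID"]
AUTH = (ACCOUNT_SID, os.environ["TWILIO_AUTH_TOKEN"])

def executions_for_call(to_number, from_number, date_from, date_to):
    # Steps 1-2: look up the incoming number the call was made to.
    numbers_url = f"https://api.twilio.com/2010-04-01/Accounts/{ACCOUNT_SID}/IncomingPhoneNumbers.json"
    numbers = requests.get(numbers_url, auth=AUTH,
                           params={"PhoneNumber": to_number}).json()["incoming_phone_numbers"]
    # Step 3: a Studio-attached number has a voice_url like
    # https://webhooks.twilio.com/v1/Accounts/AC.../Flows/FW...,
    # so the flow SID is the last path segment.
    flow_sid = numbers[0]["voice_url"].rstrip("/").split("/")[-1]
    # Step 4: list executions for that flow around the call time.
    executions_url = f"https://studio.twilio.com/v1/Flows/{flow_sid}/Executions"
    executions = requests.get(executions_url, auth=AUTH,
                              params={"DateCreatedFrom": date_from,
                                      "DateCreatedTo": date_to}).json()["executions"]
    # Step 5: narrow by the caller's number if several executions match.
    return [e for e in executions if e["contact_channel_address"] == from_number]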
Anyway, if you are calling the function from Studio, philnash's answer is better :)
I would like to get the number of pull requests and issues for a particular GitHub repo. At the moment the method I'm using is really clumsy.
Using the octokit gem and the following code:
# Builds data that is sent to the API
def request_params
  data = {}
  # labels example: "bug,invalid,question"
  data["labels"] = labels.present? ? labels : ""
  # filter example: "assigned" "created" "mentioned" "subscribed" "all"
  data["filter"] = filter
  # state example: "open" "closed" "all"
  data["state"] = state
  return data
end

Octokit.auto_paginate = true
github = Octokit::Client.new(access_token: oauth_token)
github.list_issues("#{user}/#{repository}", request_params).count
The data received is extremely big, so it's very inefficient in terms of memory. I don't need the data about the issues, only how many there are - X issues (based on the filters / state / labels).
I thought of a solution but was not able to implement it.
Basically: make one request to get the headers; the headers should include a link to the last page. Then make one more request to the last page and check how many issues are on it. Then we can calculate:
count = ( (number of pages - 1) * issues-per-page ) + issues-on-last-page
But I could not find out how to get response header information from the Octokit authenticated client.
If there is a simple way of doing it without Octokit, I will happily use it.
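For example, something like this (a sketch in Python with requests; the repo and token are placeholders). With per_page=1, the page number of the rel="last" link is itself the total count, so the formula above collapses to a single header read:

import requests
from urllib.parse import urlparse, parse_qs

OWNER, REPO, TOKEN = "octocat", "hello-world", "ghp_..."  # placeholders

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/issues",
    params={"state": "open", "per_page": 1},
    headers={"Authorization": f"token {TOKEN}"},
)
last = resp.links.get("last")  # requests parses the Link header for you
# If there is no "last" link, everything fit on one page.
count = int(parse_qs(urlparse(last["url"]).query)["page"][0]) if last else len(resp.json())
print(count)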
Note: I want to fix this issue because the number of pull requests is quite high, and the code above generates R14 errors on Heroku.
Thank You!
I feel an easy way is to use the GitHub API and restrict the number of PRs displayed per page using the per_page parameter. For example, to count all the PRs of the repo OneGet/oneget you can use https://api.github.com/search/issues?q=repo:OneGet/oneget+type:pr&per_page=1. The JSON response has a "total_count" field, which gives the total number of PRs, and the response will be relatively light since it will list only one issue.
Ref: Search Issues
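Here's a minimal sketch of the same call in Python with requests, if that's useful:

import requests

# Search API: total_count reports the match count without shipping every item.
resp = requests.get(
    "https://api.github.com/search/issues",
    params={"q": "repo:OneGet/oneget type:pr", "per_page": 1},
)
print(resp.json()["total_count"])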
We currently have a Slack channel with ~8K messages, all of which come from a Jenkins integration. Is there any programmatic way to delete all messages from that channel? The web interface can only delete 100 messages at a time.
I quickly found out that someone had already made a helper for this: slack-cleaner.
And for me it's just:
slack-cleaner --token=<TOKEN> --message --channel jenkins --user "*" --perform
I wrote a simple node script for deleting messages from public/private channels and chats. You can modify and use it.
https://gist.github.com/firatkucuk/ee898bc919021da621689f5e47e7abac
First, modify your token in the scripts configuration section then run the script:
node ./delete-slack-messages CHANNEL_ID
Get an OAuth token:
Go to https://api.slack.com/apps
Click 'Create New App', and name your (temporary) app.
In the side nav, go to 'OAuth & Permissions'
On that page, find the 'Scopes' section. Click 'Add an OAuth Scope' and add 'channels:history' and 'chat:write'. (see scopes)
At the top of the page, click 'Install App to Workspace'. Confirm, and on page reload, copy the OAuth Access Token.
Find the channel ID
The channel ID can be seen in the browser URL when you open Slack in the browser, e.g.
https://mycompany.slack.com/messages/MY_CHANNEL_ID/
or
https://app.slack.com/client/WORKSPACE_ID/MY_CHANNEL_ID
The default clean command did not work for me, giving the following error:
$ slack-cleaner --token=<TOKEN> --message --channel <CHANNEL>
Running slack-cleaner v0.2.4
Channel, direct message or private group not found
but the following worked without any issue to clean the bot messages:
slack-cleaner --token <TOKEN> --message --group <CHANNEL> --bot --perform --rate 1
or
slack-cleaner --token <TOKEN> --message --group <CHANNEL> --user "*" --perform --rate 1
to clean all the messages.
I use a rate limit of 1 second to avoid HTTP 429 Too Many Requests errors from the Slack API rate limit. In both cases, the channel name was supplied without the # sign.
For anyone else who doesn't need to do it programmatically,
here's a quick way:
(probably for paid users only)
Open the channel in web or the desktop app, and click the cog (top right).
Choose "Additional options..." to bring up the archival menu (see notes below).
Select "Set the channel message retention policy".
Set "Retain all messages for a specific number of days".
All messages older than this time are deleted permanently!
I usually set this option to "1 day" to leave the channel with some context, then I go back into the above settings and set its retention policy back to "default" so messages are stored from now on.
Notes:
Luke points out: if the option is hidden, you have to go to the global workspace admin settings, under Message Retention & Deletion, and check "Let workspace members override these settings"
!!UPDATE!!
As #niels-van-reijmersdal mentioned in a comment, this feature has been removed. See this thread for more info: twitter.com/slackhq/status/467182697979588608?lang=en
!!END UPDATE!!
Here is a nice answer from SlackHQ on Twitter, and it works without any third-party stuff.
https://twitter.com/slackhq/status/467182697979588608?lang=en
You can bulk delete via the archives page (http://my.slack.com/archives) for a particular channel: look for "delete messages" in the menu.
Option 1 You can set a Slack channel to automatically delete messages after 1 day, but it's a little hidden. First, you have to go to your Slack Workspace Settings, Message Retention & Deletion, and check "Let workspace members override these settings". After that, in the Slack client you can open a channel, click the gear, and click "Edit message retention..."
Option 2 The slack-cleaner command line tool that others have mentioned.
Option 3 Below is a little Python script that I use to clear Private channels. Can be a good starting point if you want more programmatic control of deletion. Unfortunately Slack has no bulk-delete API, and they rate-limit the individual delete to 50 per minute, so it unavoidably takes a long time.
# -*- coding: utf-8 -*-
"""
Requirement: pip install slackclient
"""
import multiprocessing.dummy, ctypes, time, traceback, datetime
from slackclient import SlackClient

legacy_token = raw_input("Enter token of an admin user. Get it from https://api.slack.com/custom-integrations/legacy-tokens >> ")
slack_client = SlackClient(legacy_token)

name_to_id = dict()
res = slack_client.api_call(
    "groups.list",  # groups are private channels, conversations are public channels. Different API.
    exclude_members=True,
)
print("Private channels:")
for c in res['groups']:
    print(c['name'])
    name_to_id[c['name']] = c['id']

channel = raw_input("Enter channel name to clear >> ").strip("#")
channel_id = name_to_id[channel]

pool = multiprocessing.dummy.Pool(4)  # slack rate-limits the API, so not much benefit to more threads.
count = multiprocessing.dummy.Value(ctypes.c_int, 0)

def _delete_message(message):
    try:
        success = False
        while not success:
            res = slack_client.api_call(
                "chat.delete",
                channel=channel_id,
                ts=message['ts']
            )
            success = res['ok']
            if not success:
                if res.get('error') == 'ratelimited':
                    # print(res)
                    time.sleep(float(res['headers']['Retry-After']))
                else:
                    raise Exception("got error: %s" % (str(res.get('error'))))
        count.value += 1
        if count.value % 50 == 0:
            print(count.value)
    except:
        traceback.print_exc()

retries = 3
hours_in_past = int(raw_input("How many hours in the past should messages be kept? Enter 0 to delete them all. >> "))
latest_timestamp = ((datetime.datetime.utcnow() - datetime.timedelta(hours=hours_in_past)) - datetime.datetime(1970, 1, 1)).total_seconds()
print("deleting messages...")
while retries > 0:
    # see https://api.slack.com/methods/conversations.history
    res = slack_client.api_call(
        "groups.history",
        channel=channel_id,
        count=1000,
        latest=latest_timestamp,
    )  # important to do paging. Otherwise Slack returns a lot of already-deleted messages.
    if res['messages']:
        latest_timestamp = min(float(m['ts']) for m in res['messages'])
        print(datetime.datetime.utcfromtimestamp(float(latest_timestamp)).strftime("%r %d-%b-%Y"))
        pool.map(_delete_message, res['messages'])
    if not res["has_more"]:  # Slack API seems to lie about this sometimes
        print("No data. Sleeping...")
        time.sleep(1.0)
        retries -= 1
    else:
        retries = 10
print("Done.")
Note that the script will need modification to list and clear public channels; the API methods for those are channels.* instead of groups.*.
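For example, the listing step for public channels might look like this (same legacy slackclient API as the script above; this assumes your token has the needed scopes):

# Public-channel variant of the listing step:
res = slack_client.api_call("channels.list", exclude_members=True)
for c in res['channels']:
    name_to_id[c['name']] = c['id']
# ...then page with "channels.history" instead of "groups.history".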
As other answers allude to, Slack's rate limits make this tricky: the rate limit is relatively low for the chat.delete API at ~50 requests per minute.
The best strategy that respects the rate limit is to retrieve messages from the channel you want to clear, then delete the messages in batches under 50 that run on a minutely interval.
I've built a project containing an example of this batching that you can easily fork and deploy on Autocode - it lets you clear a channel via slash command (and allows you to restrict access to the command to just certain users, of course!). When you run /cmd clear in a channel, it marks that channel for clearing and runs the following code every minute until it deletes all the messages in the channel:
console.log(`About to clear ${messages.length} messages from #${channel.name}...`);
let deletionResults = await async.mapLimit(messages, 2, async (message) => {
  try {
    await lib.slack.messages['#0.6.1'].destroy({
      id: clearedChannelId,
      ts: message.ts,
      as_user: true
    });
    return {
      successful: true
    };
  } catch (e) {
    return {
      successful: false,
      retryable: e.message && e.message.indexOf('ratelimited') !== -1
    };
  }
});
You can view the full code and a guide to deploying your own version here: https://autocode.com/src/jacoblee/slack-clear-messages/
Tip: if you're going to use slack-cleaner (https://github.com/kfei/slack-cleaner), you will need to generate a legacy token: https://api.slack.com/custom-integrations/legacy-tokens
If you like Python and have obtained a legacy API token from the Slack API, you can delete all private messages you sent to a user with the following:
import requests
import sys
import time
from json import loads

# config - replace the bit between quotes with your "token"
token = 'xoxp-854385385283-5438342854238520-513620305190-505dbc3e1c83b6729e198b52f128ad69'

# replace 'Carl' with name of the person you were messaging
dm_name = 'Carl'

# helper methods
api = 'https://slack.com/api/'
suffix = 'token={0}&pretty=1'.format(token)

def fetch(route, args=''):
    '''Make a GET request for data at `url` and return formatted JSON'''
    url = api + route + '?' + suffix + '&' + args
    return loads(requests.get(url).text)

# find the user whose dm messages should be removed
target_user = [i for i in fetch('users.list')['members'] if dm_name in i['real_name']]
if not target_user:
    print(' ! your target user could not be found')
    sys.exit()

# find the channel with messages to the target user
channel = [i for i in fetch('im.list')['ims'] if i['user'] == target_user[0]['id']]
if not channel:
    print(' ! your target channel could not be found')
    sys.exit()

# fetch and delete all messages
print(' * querying for channel', channel[0]['id'], 'with target user', target_user[0]['id'])
args = 'channel=' + channel[0]['id'] + '&limit=100'
result = fetch('conversations.history', args=args)
messages = result['messages']
print(' * has more:', result['has_more'], result.get('response_metadata', {}).get('next_cursor', ''))

while result['has_more']:
    cursor = result['response_metadata']['next_cursor']
    result = fetch('conversations.history', args=args + '&cursor=' + cursor)
    messages += result['messages']
    print(' * next page has more:', result['has_more'])

for idx, i in enumerate(messages):
    # tier 3 method rate limit: https://api.slack.com/methods/chat.delete
    # all rate limits: https://api.slack.com/docs/rate-limits#tiers
    time.sleep(1.05)
    result = fetch('chat.delete', args='channel={0}&ts={1}'.format(channel[0]['id'], i['ts']))
    print(' * deleted', idx + 1, 'of', len(messages), 'messages', i['text'])
    if result.get('error', '') == 'ratelimited':
        print('\n ! sorry there have been too many requests. Please wait a little bit and try again.')
        sys.exit()
Here is a great Chrome extension to bulk delete your Slack channel/group/IM messages - https://slackext.com/deleter - where you can filter the messages by star, time range, or users.
BTW, recent versions also support loading all messages, so you can load your ~8K messages as needed.
There is a Slack tool to delete all Slack messages in your workspace. Check it out: https://www.messagebender.com
I have a huge database where some data sets link to certain YouTube videos. As we all know, some YouTube videos disappear after a while, and this leads to both my solution and my problem: I'd like to check whether a YouTube video still exists by simply checking via JSON whether there is data to retrieve for the video. If not, then I'd simply delete that data set.
So the first part of my solution would be to go through each row of my data table and check, for each ID, whether there is data to retrieve from YouTube, as seen in the following code:
$result = $db->query("SELECT id, link FROM songs");
$rown = 0; // initialize the row counter used below
while ($row = $result->fetch_assoc())
{
    $number = ++$rown;
    $id = $row['id'];
    $link = $row['link'];
    $video_ID = $link;
    $JSON = file_get_contents("https://gdata.youtube.com/feeds/api/videos/{$video_ID}?v=2&alt=json");
    $JSON_Data = json_decode($JSON);
    $views = $JSON_Data->{'entry'}->{'yt$statistics'}->{'viewCount'};
    echo $number . ' row<br />';
    echo $link . ' link<br />';
    echo $views . ' views<br /><br />';
}
This attempt works fine and outputs the data I need. The only problem is that it only gets me data for the first 150-190 rows, and that's it. Now I am looking for a solution that checks each row for empty YouTube data, and this leads to two concrete questions:
1st) Might YouTube be responsible for this, due to a restriction on how many requests can be made in a short amount of time?
2nd) Might this be a server issue on my side that stops queries after x seconds (I already extended the time limit by putting set_time_limit(10000000); into my PHP code, but without success)?
Hope you can help, thanks in advance.
YouTube, naturally, enforces limits on how many requests you can make per period of time. Unfortunately, there are no clear guidelines on what those limits are ... for v2, the guidelines merely state:
The YouTube API enforces quotas to prevent problems associated with irregular API usage. Specifically, a too_many_recent_calls error indicates that the API servers have received too many calls from the same caller in a short amount of time. If you receive this type of error, then we recommend that you wait a few minutes and then try your request again.
If time isn't an issue for you, you could slow down each query so that you only make one request every 10-15 seconds or so. Alternatively, you'd probably have better luck with batch processing. With this, you can make up to 50 requests at once (this counts as 50 requests against your overall requests-per-day quota, but only as one against your per-time quota). Batch processing with v2 of the API is a little involved, as you make a POST request to a batch endpoint first, and then based on those results you can send in the multiple requests. Here's the documentation:
https://developers.google.com/youtube/2.0/developers_guide_protocol?hl=en#Batch_processing
Batch processing is much easier with v3, as you just set the id parameter to a comma-delimited list of the video IDs you want info on -- so in your case, you'd execute file_get_contents on a URL like this:
https://www.googleapis.com/youtube/v3/videos?part=id&id={comma-separated-list-of-IDs}&maxResults=50&key={YOUR_API_KEY}
Any video ID in your list that doesn't come back in the JSON response doesn't exist anymore. If you do 50 at a time, wait 15 seconds, do another 50, etc., that should give you better performance.
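If you want a quick sketch of that batching (Python with requests here for brevity; the API key is a placeholder):

import time
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

def existing_video_ids(video_ids):
    """Return the subset of video_ids that still exist, checking 50 per request."""
    alive = set()
    for i in range(0, len(video_ids), 50):
        batch = video_ids[i:i + 50]
        resp = requests.get(
            "https://www.googleapis.com/youtube/v3/videos",
            params={"part": "id", "id": ",".join(batch), "maxResults": 50, "key": API_KEY},
        ).json()
        # Any ID missing from the response no longer exists.
        alive.update(item["id"] for item in resp.get("items", []))
        time.sleep(15)  # stay friendly to the per-time quota
    return alive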