I want to retrieve all the messages that were sent in my team's Slack domain. I'd prefer to receive the data in XML or JSON, but I can handle it in just about any form.
How can I retrieve all these messages? Is it possible? If not, can I retrieve all the messages for a specific channel?
If you need to do this dynamically via the API, you can use the channels.list method to list all of the channels in your team and the channels.history method to retrieve the history of each channel. Note that this will not include DMs or private groups.
If you need to do this as a one time thing, go to https://my.slack.com/services/export to export your team's message archives as series of JSON files
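A minimal sketch of that dynamic route in Python, using only the standard library (the token is a placeholder; channels.list and channels.history are the legacy method names mentioned above, and newer workspaces expose the same data via conversations.list/conversations.history):

```python
import json
import urllib.parse
import urllib.request

API = "https://slack.com/api"

def api_url(method, **params):
    """Build the full Web API URL for a method plus query parameters."""
    return f"{API}/{method}?{urllib.parse.urlencode(params)}"

def call(method, token, **params):
    """Perform one Web API GET and decode the JSON envelope."""
    with urllib.request.urlopen(api_url(method, token=token, **params)) as resp:
        return json.loads(resp.read())

def fetch_all_channel_history(token):
    """List every public channel, then download each channel's history."""
    return {ch["name"]: call("channels.history", token, channel=ch["id"])["messages"]
            for ch in call("channels.list", token)["channels"]}
```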
This Python script exports everything to JSON in a single run:
https://gist.github.com/Chandler/fb7a070f52883849de35
It creates the directories for you and you have the option to exclude direct messages or channels.
All you need to install is the slacker module (pip install slacker). Then run the script with --token='secret-token'. You need a legacy token, which at the moment is available here.
For anyone looking for Direct Message history downloads, this Node-based CLI tool allows you to download messages from DMs and IMs in both JSON and CSV. I've used it, and it works very well.
With the new Conversations API this task is a bit easier now. Here is a full overview:
Fetching messages from a channel
The new API method conversations.history will allow you to download messages from every type of conversation / channel (public, private, DM, Group DM) as long as your token has access to it.
This method also supports paging allowing you to download large amounts of messages.
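As a sketch, the paging loop can be written against the response_metadata.next_cursor convention (fetch_page here is a hypothetical callable that wraps conversations.history and passes the cursor through):

```python
def drain_pages(fetch_page):
    """Collect every message from a cursor-paginated Slack method.
    fetch_page(cursor) must return the decoded JSON response; the API
    signals the next page via response_metadata.next_cursor and ends
    the sequence with an empty/missing cursor."""
    cursor, messages = None, []
    while True:
        page = fetch_page(cursor)
        messages += page.get("messages", [])
        cursor = page.get("response_metadata", {}).get("next_cursor")
        if not cursor:
            return messages
```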
Resolving IDs to names
Note that this method will return messages in a raw JSON format with IDs only, so you will need to call additional API methods to resolve those IDs into plain text:
user IDs: users.list
channel IDs: conversations.list
bot IDs: bots.info (there is no official bots.list method, but there is an unofficial one, which might help in some cases)
Fetching threads
In addition, use conversations.replies to download the threads in a conversation. Threads function a bit like conversations within a conversation and need to be downloaded separately.
Check out this page of the official documentation for more details on threading.
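A small helper can identify which downloaded messages are thread parents needing a separate conversations.replies call (a sketch based on the convention that a parent message's thread_ts equals its own ts):

```python
def thread_parent_ts(messages):
    """Return the ts of every thread parent in a channel's history;
    each one needs its own conversations.replies call to fetch replies.
    A parent carries a thread_ts equal to its own ts; replies carry the
    parent's ts instead."""
    return [m["ts"] for m in messages
            if "ts" in m and m.get("thread_ts") == m["ts"]]
```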
If anyone is still looking for a solution in 2021 and has no assistance from their workspace admins to export messages, the following works.
Step 1: Get the api token from your UI cookie
Clone and install requirements and run SlackPirate
Open Slack in a browser and copy the value of the cookie named d
Run python3 SlackPirate.py --cookie '<value of d cookie>'
Step 2: Dump the channel messages
Install slackchannel2pdf (Requires python)
slackchannel2pdf --token 'xoxb-1466...' --write-raw-data T0EKHQHK2/G015H62SR3M
Step 3: Dump the direct messages
Install slack-history-export (Requires node)
slack-history-export -t 'xoxs-1466...' -u '<correct username>' -f 'my_colleagues_chats.json'
I know that this might be late for the OP, but if anyone is looking for a tool capable of doing a full Slack workspace export, try Slackdump. It's free and open source (I'm the author, but anyone can contribute).
To do the workspace export, run it with the -export switch:
./slackdump -export my_export.zip
If you need to download attachments as well, use the -f switch (stands for "files"):
./slackdump -f -export my_export.zip
It will open the browser asking you to log in. If you need to run it headless, grab a token and cookie as described in the documentation.
It generates an export file compatible with another nice tool, slack-export-viewer.
All the messages from a particular channel in Slack can be retrieved using the conversations.history method in the slack_sdk library for Python.
def get_conversation_history(self, channel_id, latest, oldest):
    """Fetch the conversation history of a particular channel."""
    try:
        # client is a slack_sdk WebClient created elsewhere
        result = client.conversations_history(
            channel=channel_id,
            inclusive=True,
            latest=latest,
            oldest=oldest,
            limit=100)
        all_messages = []
        all_messages += result["messages"]
        # Page through the rest with the cursor from the previous response
        while result['has_more']:
            result = client.conversations_history(
                channel=channel_id,
                cursor=result['response_metadata']['next_cursor'],
                limit=100)
            all_messages += result["messages"]
        return all_messages
    except SlackApiError:
        logger.exception("Error while fetching the conversation history")
        return []
Here, I have provided latest and oldest timestamps to restrict collection to a specific time range of the conversation history.
The cursor argument points at the next page of results: the method returns at most 100 messages per call, but it supports pagination, and the next cursor value comes from result['response_metadata']['next_cursor'].
Hope this will be helpful.
Here is another tool for exporting all messages from a channel.
The tool is called slackchannel2pdf and will export all messages from a public or private channel to a PDF document.
You only need a token with the required scopes and access.
I have an old Conversation's Channel ACTIVE with my phone number, so the incoming messages are not entering the Studio's flow. I need the Conversation's Channel SID so I can close that channel using the API Explorer.
The only data I have is the received message, found via the Monitor > Messaging screen in the Twilio Console.
(Screenshot of the Monitor > Messaging screen.)
But I can't find an API that will get me the SID I need from the data I have.
Right now, I built a bash script to fetch all conversations; when the script finishes, I will grep for the WhatsApp number to find the conversation SID.
#!/bin/bash
ACCOUNT_SID=ACXXXXXXXXXXXXXXXXX
AUTH_TOKEN=XXXXXXXXXXXXX
PAGE_SIZE=100
MAX_PAGES=1000
DOWNLOAD_DIR=downloaded_channels
rm -rf ${DOWNLOAD_DIR}
mkdir -p ${DOWNLOAD_DIR}
# Follow the meta.next_page_url cursor (requires jq); re-requesting the
# bare /Conversations URL would return the same first page every time.
URL="https://conversations.twilio.com/v1/Conversations?PageSize=${PAGE_SIZE}"
PAGE=0
while [[ "${URL}" != "null" && ${PAGE} -lt ${MAX_PAGES} ]]; do
    echo "downloading page ${PAGE}"
    curl -s "${URL}" -u ${ACCOUNT_SID}:${AUTH_TOKEN} > ${DOWNLOAD_DIR}/page_${PAGE}.json
    URL=$(jq -r '.meta.next_page_url' ${DOWNLOAD_DIR}/page_${PAGE}.json)
    sleep 0.1
    (( PAGE++ ))
done
The problem with this approach is that I have thousands (if not millions) of conversations, and Twilio's API allows result pages of 100 elements max. So if I'm not lucky enough to find my conversation in the first downloaded pages, I will be here forever.
Does anyone know a better approach?
If you're using the Conversations API, you can use the ParticipantConversations resource to search by your address and get all the conversations assigned to your number.
Follow the API documentation: https://www.twilio.com/docs/conversations/api/participant-conversation-resource
Here is an example using the Twilio CLI:
twilio api:conversations:v1:participant-conversations:list --address "whatsapp:+myNumber" --properties conversationSid,conversationState --no-limit
If you're using Programmable Chat, there is no API to search by your phone number, but you can list all channels and filter for just the ones with your number in the attributes (this can take a long time depending on the number of conversations in your account).
Here is an example using the Twilio CLI:
twilio api:chat:v2:services:channels:list --service-sid <service_sid> --properties sid,attributes --no-limit | grep myNumber
I hope that it can help you! :D
Twilio fanatic here.
According to the Conversation resource page, since you are using curl you can specify the request parameters in the request URL, similar to how you would with Twilio's native API functions (read()).
From the Conversation Resource documentation (modified to include the State and DateCreated parameters):
curl -G "https://conversations.twilio.com/v1/Conversations" \
--data-urlencode "State=active" \
--data-urlencode "DateCreated=>=YYYY-MM-DD" \
-u $TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN
Please be sure to URL-encode the DateCreated parameter, as it includes non-URL-friendly symbols (>= to mean "on or after"; I am not sure the written format is suitable for bash as-is).
You may use any of the 10 available Conversation parameters to specify which Conversations are returned. As stated, some lines or variable names may not suit your application as-is, but with a bit of finagling they should produce the result you're looking for. For example, DateModified, or looking for a custom value placed under attributes, might suit your case better.
If this helps, please upvote. I am trying to get noticed by Twilio and get hired to fulfill my dream of working while RVing fulltime.
I am trying to extract a large amount of detail from our Eloqua system using its API, and I got this API to work perfectly for single IDs: https://docs.oracle.com/en/cloud/saas/marketing/eloqua-rest-api/op-api-rest-1.0-data-contact-id-get.html
The problem is that I need to run this for a large number of IDs, and it would take a lot of calls to cover the entire population. Are there any bulk APIs that can extract all of the following details from Eloqua/Contact for the entire population? I don't see any on that page's documentation that meet this need under the Bulk section.
contactid, company, employees, company_revenue, business_phone, email_address, web_domain, date_created, date_modified, address_1, address_2, city, state_or_province, zip_or_postal_code, mobile_phone, first_name, last_name, title
It's a multi-step process with the Bulk API, typically in the following fashion:
Get a list of the current internal field names - useful for creating your export definition
Create an export definition and post it here. There is a useful example on the page, you do not need a filter criteria. Store the export ID somewhere
Using your export definition id, create a sync. It will gather the data in the background and prepare it for you. Take note of the sync ID provided in the initial response.
Check on the sync status with your sync ID here. It should only take a couple of minutes - and there is a callback url option as well in the previous step, if you don't want to keep polling.
Once your data is ready, use that sync id and request the data. Depending on how many rows were retrieved, you might need to paginate through the results using the offset query param. By default it will give you JSON, but I usually choose CSV (specify in the header).
If you need updated data, feel free to create a new sync using the same export definition id. You do not need to create a new export definition each time.
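The steps above can be sketched in Python. This is a rough sketch, not production code: the pod URL, credentials, and field statements are assumptions you must adapt for your instance, and error handling is omitted.

```python
import base64
import json
import time
import urllib.request

BASE = "https://secure.p01.eloqua.com"   # hypothetical pod URL
AUTH = "MySite\\my.user:my-password"     # hypothetical sitename\user:password

def contact_export_definition():
    """Export definition payload (step 2). The field statements here are
    common defaults; verify them against /api/bulk/2.0/contacts/fields
    (step 1) for your instance."""
    return {
        "name": "Full contact export",
        "fields": {
            "ContactId": "{{Contact.Id}}",
            "EmailAddress": "{{Contact.Field(C_EmailAddress)}}",
            "FirstName": "{{Contact.Field(C_FirstName)}}",
            "LastName": "{{Contact.Field(C_LastName)}}",
        },
    }

def call(method, path, payload=None):
    """One authenticated Bulk API 2.0 request, JSON in / JSON out."""
    req = urllib.request.Request(
        BASE + "/api/bulk/2.0" + path,
        data=None if payload is None else json.dumps(payload).encode(),
        headers={
            "Authorization": "Basic " + base64.b64encode(AUTH.encode()).decode(),
            "Content-Type": "application/json",
        },
        method=method)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def run_export():
    export = call("POST", "/contacts/exports", contact_export_definition())  # step 2
    sync = call("POST", "/syncs", {"syncedInstanceUri": export["uri"]})      # step 3
    while sync["status"] not in ("success", "warning", "error"):             # step 4
        time.sleep(10)
        sync = call("GET", sync["uri"])
    rows, offset = [], 0
    while True:                                                              # step 5
        page = call("GET", f"{sync['uri']}/data?offset={offset}&limit=1000")
        rows += page.get("items", [])
        if not page.get("hasMore"):
            return rows
        offset += 1000
```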
First of all, please share if there is any official MS Graph SDK documentation anywhere that I can use for reference.
I have a scenario, where I want to query all manager and member links from AAD without providing the user and group objectID respectively. This is currently supported in DQ channel, i.e. I can do something like this using MsGraphSDK:
MsGraphClient.Users.Delta().Request().Select("manager")
OR
MsGraphClient.Groups.Delta().Request().Select("members")
I don't want to use DQ for initial-sync due to performance problems, and other issues.
My fallback option is to query through Graph directly, so I want to do something like the following, but this doesn't return any result:
MsGraphClient.Users.Request().Select("manager")
OR
MsGraphClient.Groups.Request().Select("members")
It looks like this isn't even supported currently at the lower (AADGraph) layer. Please correct me if I am wrong, and provide a solution if any!
So my fallback approach is to pull all the user and group aadObjectIds, and explicitly query the manager and member links respectively.
In my case, there can potentially be 500K User-Objects in AAD, and I want to avoid making 500K separate GetManager calls to AAD. Instead, I want to batch the Graph requests as much as possible.
I wasn't able to find much help from the Internet on sending Batch requests through SDK.
Here's what I am doing:
I have this BatchRequestContent:
var batchRequestContent = new BatchRequestContent();
foreach (string aadObjectId in aadObjectIds)
{
batchRequestContent.AddBatchRequestStep(new BatchRequestStep(aadObjectId, Client.Users[aadObjectId].Manager.Request().GetHttpRequestMessage()));
}
and I am trying to send a BatchRequest through GraphSDK with this content to get a BatchResponse. Is this currently supported in SDK? If yes, then what's the procedure? Any documentation or example? How to read the batch-response back? Finally, is there any limit for the # of requests in a batch?
Thanks,
Here is a related post: $expand=manager does not expand manager
$expand is currently not supported on the manager and directReports relationships in the v1.0 endpoint. It is supported in the beta endpoint, but the API returns way too much throwaway information: https://graph.microsoft.com/beta/users?$expand=manager
The client library partially supports batching at this time, although we have a couple of pull requests to provide better support with the next release (PR 1 and 2).
To use batch with the current library and your authenticated client, you'll do something like this:
var authProv = MsGraphClient.AuthenticationProvider;
var httpClient = GraphClientFactory.Create(authProv);
// Send batch request with BatchRequestContent.
HttpResponseMessage response = await httpClient.PostAsync("https://graph.microsoft.com/v1.0/$batch", batchRequestContent);
// Handle http responses using BatchResponseContent.
BatchResponseContent batchResponseContent = new BatchResponseContent(response);
Problem
Using the Twilio REST API, I want to request only messages that I haven't downloaded yet. It seems the cleanest way to do this would be to download only messages after a specified SID.
Information not in the docs
The Twilio filter docs don't have this option. They only describe to, from, and date_sent.
However, it appears that Twilio does have this feature. You can see in their paging information that the next_page_uri contains AfterSid.
When browsing the website, the URL contains /user/account/log/messages?after=SMXXXXXX
What I've tried so far
Using the twilio-ruby client, I have tried the following without success:
list = @client.account.sms.messages.list({after: 'SMXXXXXX'})
list = @client.account.sms.messages.list({AfterSid: 'SMXXXXXX'})
list = @client.account.sms.messages.list({after_sid: 'SMXXXXXX'})
From Dan Markiewicz - Twilio Customer Support
Unfortunately, we do not support filtering by this field in our API at this time. Your best option would be to get the DateCreated info on the SID you want to filter by and then use that to filter the messages by only those sent after that date. Since the date filter only supports filtering down to the day, it may return some number of unwanted messages that were sent that day but before the message you want to filter by. However, each message in the list will have a full date_created field down to the second, so you should be able to filter these out fairly easily on your end. This should produce the result you need.
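That suggestion amounts to a date-range fetch plus a client-side pass. Here is a sketch of the client-side part; the message shape (dicts with sid and a full-precision date_created) is an assumption for illustration:

```python
def filter_after_sid(messages, boundary_sid):
    """Keep only messages created strictly after the boundary message,
    dropping the same-day-but-earlier ones that a day-granularity date
    filter lets through. Assumes each message is a dict with 'sid' and
    a full-precision 'date_created' (datetime)."""
    boundary = next(m for m in messages if m["sid"] == boundary_sid)
    return [m for m in messages
            if m["date_created"] > boundary["date_created"]]
```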
After looking at the documentation you outlined, it looks like what you want to accomplish can't be done by the twilio-ruby gem. This link shows which filters are supported by the list method inside the gem in regards to messages.
If you look at the source here, starting on line 45 the gem uses next_page_uri as a way of determining the offset of where the next page should begin. For instance:
calls = twilio_client.account.calls.list # returns the initial set of calls.
calls.next_page # this uses next_page_uri to return the next set of calls internally.
This isn't something that can be changed via the gem currently.
I am a python developer, not a QB user, so please forgive me for using poor terminology.
We have an internal web application that is used by finance staff to identify small sets of payment requests for a particular payee. They then use QB to generate a check to pay that payee. Then they manually enter the check number and date back into the web app. I am modifying the web application so that the finance staff can use the app to bundle the payment requests together and - I hope - send a request to QB for the checks, one per payee. The data to send would be the payee information, the check amount, and whatever else is required. The data to get back from QB (later) would be the check number and check date and some identifier that I can use to match it up with the requests sent earlier.
What is the best way to implement this communication with QB?
I use Python 3 to construct qbXML queries, and ElementTree to parse the responses.
I use a non-Windows machine for development, too. I found that I really needed QB and Python together in a Windows VM to make progress; both QB and COM require it.
Here are a couple of snippets in Python 3.1 to show how I do it:
First, use COM to connect to QuickBooks and disconnect from QuickBooks.
import win32com.client

def start(external_application_name):
    """Connect to a QuickBooks instance using COM. Start QB if needed."""
    SessionManager = win32com.client.Dispatch("QBXMLRP2.RequestProcessor")
    # Example only - insecure!
    SessionManager.OpenConnection('', external_application_name)
    return SessionManager

def stop(SessionManager):
    """Disconnect from the existing QuickBooks instance."""
    SessionManager.CloseConnection()
    return
Next, I use Python to do the logic, construct the qbXML queries, and parse the responses into an ElementTree.
import xml.etree.ElementTree as etree

def xml_query(QB_SessionManager, company_file_path, qbXML_query_string):
    """
    Do a QuickBooks qbXML query.
    The query must be valid - this function doesn't do error checking.
    Return an ElementTree of the XML response.
    """
    ticket = QB_SessionManager.BeginSession(company_file_path, 0)
    response_string = QB_SessionManager.ProcessRequest(ticket, qbXML_query_string)
    #print(response_string)  # Debug tool
    QB_SessionManager.EndSession(ticket)
    QBXML = etree.fromstring(response_string)
    response_tree = QBXML.find('QBXMLMsgsRs')
    #etree.dump(QBXML)  # Debug tool
    return response_tree
The actual qbXML query string and response string for a check query are on the qbXML reference at https://member.developer.intuit.com/qbSDK-current/Common/newOSR/index.html
There, you will see that you can download the check data filtered by payee, date range, check number range, etc.
You can chain multiple XML queries or transactions into a single large XML file. Assign each a unique request number (Example: <CustomerQueryRq requestID="1"> ), so you can locate the proper response.
Use the <IncludeRetElement> tag to limit the reply size, and speed the search tremendously.
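As an illustration of such a query (element names per the OSR; the filter values are placeholders), the request XML can be assembled with the same ElementTree module used above for parsing:

```python
import xml.etree.ElementTree as etree

def build_check_query(request_id, from_date, to_date, fields):
    """Build a qbXML CheckQueryRq filtered by transaction date, asking
    only for the listed return elements via IncludeRetElement."""
    root = etree.Element("QBXML")
    msgs = etree.SubElement(root, "QBXMLMsgsRq", onError="stopOnError")
    query = etree.SubElement(msgs, "CheckQueryRq", requestID=str(request_id))
    date_filter = etree.SubElement(query, "TxnDateRangeFilter")
    etree.SubElement(date_filter, "FromTxnDate").text = from_date
    etree.SubElement(date_filter, "ToTxnDate").text = to_date
    for field in fields:
        etree.SubElement(query, "IncludeRetElement").text = field
    # QB expects the xml and qbxml processing instructions before the payload
    return ('<?xml version="1.0"?><?qbxml version="13.0"?>'
            + etree.tostring(root, encoding="unicode"))
```

For example, build_check_query(1, "2020-01-01", "2020-12-31", ["TxnDate", "RefNumber", "Amount"]) produces a request string you can pass straight to the xml_query function above.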
As your app is not a SaaS app, you can use the QB SDK (qbsdk) for this use case.
SDK download link: https://developer.intuit.com/docs/0025_quickbooksapi/0055_devkits
IIF files are another option: they can be imported directly into QB, but they are no longer officially supported and importing them bypasses all the business logic.