I have a Rails app with a Picture model that uses the CarrierWave gem to handle image upload/saving.
Eventually, I plan to have an iOS app POST an image to the Picture model's controller / create action.
Before that, I'd like to test some things locally and simulate the POST event.
Can I do this by encoding and POSTing the image via the OS X Terminal? I imagine I need to encode the image file (into binary?) and POST it to the controller/action.
The easiest way to simulate this is using the command line utility curl. You can do something like:
curl -X POST -F field1=value1 -F file=@path/to/file.jpg http://example.org/pictures
The -F options let you set form field values. For example, your controller might expect a couple of form fields to be submitted along with the file upload. You can pass in multiple -F name=value options. If the value starts with an @, curl will read the value from a file (such as the image you want to upload).
The -X POST makes curl send a POST request to the server. It isn't strictly necessary here, because curl automatically switches to POST when you submit form data with -F... but it won't hurt anything either.
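For a Rails app, the file usually needs to arrive nested under the model name so that strong parameters pick it up. A minimal sketch, assuming a Picture model with an image uploader mounted, a standard resourceful pictures route, and a server running locally on port 3000 (all hypothetical names here):

# POST a multipart form to the pictures#create action;
# picture[image] is read from the local file thanks to the @ prefix
curl -X POST \
  -F "picture[title]=test upload" \
  -F "picture[image]=@/path/to/file.jpg" \
  http://localhost:3000/pictures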
curl is a very powerful tool. You can get additional information by typing man curl in your OS X Terminal window. It has a lot of options and can handle just about any situation you throw at it.
So, to confirm: I believe I have set everything up correctly, as I was able to run the sample code for the recognize-long-running method. It quickly returned a name and a JSON file with the transcription.
However, when I try to run the same code on my own audio sample, nothing happens. The API dashboard shows that a request came through, but my Terminal hangs with no response. I am using a Mac (High Sierra 10.13.6) and running the code from the command line. I also have a project set up in Google Cloud Platform and have the file in question uploaded in FLAC format. Perhaps noteworthy: my sample has a sample rate of 48000 Hz, which is higher than the recommended one, so perhaps this is messing things up?
I will paste the sample code that works below, in addition to my code.
Working sample code from Google:
gcloud ml speech recognize-long-running \
'gs://cloud-samples-tests/speech/brooklyn.flac' \
--language-code='en-US' --async
My code:
gcloud ml speech recognize-long-running \
'gs://interviewtexttospeechconversions/MelvinWeek4.flac’ \
--language-code='en-US' --async --
I think that your terminal is hanging with no response because you are using a curly quote (’) to close the file name instead of a straight quote ('). Additionally, I think you need to remove the trailing -- at the end of the gcloud command, since you are not planning to add another parameter after it.
gcloud ml speech recognize-long-running 'gs://interviewtexttospeechconversions/MelvinWeek4.flac' --language-code='en-US' --async
Finally, I recommend including the --sample-rate and --encoding parameters, which can help you avoid invalid-configuration issues.
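For example, a sketch of the full command with those flags, assuming your file really is 48000 Hz FLAC (check gcloud ml speech recognize-long-running --help for the exact accepted values):

gcloud ml speech recognize-long-running \
  'gs://interviewtexttospeechconversions/MelvinWeek4.flac' \
  --language-code='en-US' --encoding=flac --sample-rate=48000 --async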
I would like to analyze all the posts created by users of my Rails App, which is hosted on Heroku. In the console, I created a variable that contains every word ever posted on the site, which accounts for hundreds of thousands of words. I'd like to export these words from the console to do analysis elsewhere.
I've read in this post that using tee lets you capture a copy of your console's output:
How to export a Ruby Array from my Heroku console into CSV?
The problem is that if I try to print all the words, the console always shows '--More--', at which point I press the enter key to reveal more of the text. As you can imagine, for hundreds of thousands of words, it would be impractical for me to keep pressing enter to reveal the entirety of the text. How can I bypass this?
heroku run console | tee output.txt
If you're using the tee trick above, you can just type q to exit your terminal pager program (I assume it's more). tee writes simultaneously to standard out (that's why you're seeing all the output, and more automatically starts to page it) and to the file you gave it as an argument (output.txt in the case above).
Since you don't need/want to view all the output, just quit more and do what you want with the file.
Have you tried a plain old unix shell redirect?
heroku run console > output.txt
It is probably better to write a rake task to output your data, so that it is not mixed with other things that happen in the console. If you just write to stdout (for example with puts), then something like this should work:
heroku run rake db:postexport > output.txt
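Alternatively, a one-off rails runner command can do the same without defining a rake task. A minimal sketch, assuming a hypothetical Post model with a body column:

# print every post body from a one-off dyno and capture the output locally
heroku run 'rails runner "puts Post.pluck(:body)"' > output.txt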
I found something like this code to send to Telegram-CLI, but I have no idea what it means or how to do it, so if someone could explain it to me step by step I'd be so happy.
https://github.com/psamim/telegram-cli-backup
I couldn't install sqlite3 for some reason with the given code there.
I'm using Windows; do I need to boot into Ubuntu to do it?
Anyway, explain it to me like I'm 3 years old in case I don't know something.
Thanks so much.
If you are using Windows, check the instructions here. I've only tried it on Linux, and the link in your question (using Lua) works.
The following scripts also do the job well.
Python script to backup everything
https://github.com/tvdstaaij/telegram-json-backup
Here is a ruby version of the same https://github.com/tvdstaaij/telegram-history-dump
The mentioned script has been updated: it now saves the conversations into a CSV file and no longer needs the sqlite3 library. It only needs Lua.
It seems I can help you a little.
I am using Ubuntu and I wrote this Bash script:
#!/bin/bash
TOKEN='YourBot:Token'
URL="https://api.telegram.org/bot$TOKEN"
UPD_URL="$URL/getUpdates?offset="

# Ask the Bot API for pending updates and advance the offset
# past the newest update_id we have seen
function get_offset {
  res=$(curl -s "$UPD_URL$OFFSET")
  OFFSET=$(echo "$res" | grep "update_id" | cut -f 4 -d ':' | cut -f 1 -d ',' | head -1)
  OFFSET=$((OFFSET+1))
}

# Poll forever; append every response that contains a message to the log
while :
do
  get_offset
  if echo "$res" | grep -q "message"
  then echo "$res" >> BackupChat.txt
  fi
  sleep 1   # don't hammer the API
done
It is a very simple Bash script.
Obviously you must create your own bot and add the bot to the chat you want to back up.
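As a quick sanity check before running it, the Bot API's getMe method returns the bot's identity if the token is valid:

curl -s "https://api.telegram.org/bot$TOKEN/getMe"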
The bad thing about this script is that it creates a log file that is pretty difficult to read, full of clutter like "username", "date", "::", etc. But it could be improved to produce normal output, something like a proper database.
I hope you have enough Linux skills to do that yourself.
I think the situation has improved since this question was asked, so here is an answer from a 2020 point of view that does not require any programming skills or command-line tools.
To backup (aka. "export") your Telegram chats download the desktop client available here:
https://desktop.telegram.org/
On Linux, for example, unpack the downloaded file into any subdirectory like ~/tmp/, and start the client from there, like
$ cd ~/tmp/Telegram
$ ./Telegram
You will need to register first with your phone number, as on any other Telegram device, via a confirmation code sent to your already logged-in Telegram account.
The user interface looks similar to the web interface.
Go into a chat you are interested in, then in the upper right menu choose "Export chat history". Click all checkboxes you are interested in, like media files, GIFs, stickers, etc. and click export.
By default it generates a complete set of HTML files and a subdirectory structure under ~/Downloads/Telegram Desktop/, which you can open, for instance, like this:
firefox ~/Downloads/Telegram\ Desktop/ChatExport_01_02_2020/messages.html
If you need a more complete backup of all chats, you can go to the central menu (three small bars) at the top left, then "Settings" -> "Advanced" -> under "Data and Storage" choose "Export Telegram data". There you can also tick checkboxes for what you are interested in. Near the bottom is a selection between HTML for humans and machine-readable JSON.
On the very first export request, you are required to confirm the request on another Telegram instance, to prevent misuse. Once you have confirmed, e.g. from your mobile phone, you can go to export again and proceed as described above without any further confirmation.
http://my.domain/path/to/file/%E7%8D%85%E5%AD%90%E9%A0%AD.jpg?1371377932
This works just fine.
The browser knows to convert this to 獅子頭.
http://mycdn.cloudfront.net/path/to/file/%E7%8D%85%E5%AD%90%E9%A0%AD.jpg?1371377932
With the CloudFront URL, however, I get this error:
ERROR
The request could not be satisfied.
CloudFront wasn't able to connect to the origin.
Generated by cloudfront (CloudFront)
Request ID:
I don't know if you have tried it yet, but you could try using the characters themselves instead of their percent-encoded equivalents. I tested a video name with Russian characters and it worked without problems.
AWS normally supports most character sets.
This is what I did: http://dxxxxx.cloudfront.net/Учебная-программа.mp4
However, it may be possible that it does not work via a CNAME. I never tested that.
You could try to do a curl request like this:
curl -I -H "Host: my.domain" http://mycdn.cloudfront.net/path/to/file/%E7%8D%85%E5%AD%90%E9%A0%AD.jpg?1371377932
That's usually the easiest way to test whether your CDN works. The -I option in curl displays just the headers. Check that you're getting a 200 status code, and not a 404 or some other strange response.
In general, DNS changes may take time to propagate throughout the internet. You can check with tools like this: https://www.whatsmydns.net
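You can also do a quick local check of what the hostname currently resolves to, assuming dig is installed:

# see whether the CNAME already points at the CloudFront distribution
dig +short my.domain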
Is there a way to make Facebook re-parse a page to get the updated open graph object?
I know about the linter debug tool but I was wondering if there's an API or something to do it programmatically.
According to the Facebook documentation at https://developers.facebook.com/docs/technical-guides/opengraph/defining-an-object/, under "Updating Objects":
curl -X POST \
-F "id={object-url OR object-id}" \
-F "scrape=true" \
"https://graph.facebook.com"
All you have to do is hit http://developers.facebook.com/tools/debug/og/object?q={escaped URL} with an authorized session.
I had to re-lint a couple thousand URLs, so I printed them out as basic links and then used a browser plugin to download them all; it only ended up taking about 15 minutes.