Is there a way to call a private/protected twilio function? - twilio

This is my first time using Twilio. I started with the new twilio-cli and created a new project to build and deploy a backend on Twilio Functions, but I need some of the functions to stay private, and I want to call those functions through their specific API endpoints. However, I always receive the message "Unauthorized - you are not authenticated to perform this request".
This is the plugin I am using with twilio-cli to bootstrap the basic project and deploy it to Twilio: https://github.com/twilio-labs/plugin-serverless
I already tried the curl commands documented here: https://www.twilio.com/docs/studio/rest-api/execution but none of the examples execute the function.
curl -X POST 'https://serverless.twilio.com/v1/Services/ZSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Functions/ZHXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' \
-u ACXXXXXXXXXXXX:your_auth_token
I just need to receive a hello world message, this is the code of the function:
exports.handler = function(context, event, callback) {
  const twiml = new Twilio.twiml.MessagingResponse();
  twiml.message("Hello World!");
  console.log("Track this");
  callback(null, twiml);
};

The accepted answer doesn't actually answer the question.
To call a protected function, you must provide a signature in an X-Twilio-Signature header. This is how to create such a signature (according to the official docs):
Take the full URL of the request URL you specify for your phone number or app, from the protocol (https...) through the end of the query string (everything after the ?).
If the request is a POST, sort all of the POST parameters alphabetically (using Unix-style case-sensitive sorting order).
Iterate through the sorted list of POST parameters, and append the variable name and value (with no delimiters) to the end of the URL string.
Sign the resulting string with HMAC-SHA1 using your AuthToken as the key (remember, your AuthToken's case matters!).
Base64 encode the resulting hash value.
Official docs: https://www.twilio.com/docs/usage/security#validating-requests
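As a rough sketch, the steps above can be expressed in Python (the URL, parameters, and token below are placeholders, not values from the question):

```python
import base64
import hashlib
import hmac

def compute_twilio_signature(auth_token, url, post_params=None):
    # 1. Start with the full request URL, including any query string.
    data = url
    # 2.-3. Sort the POST parameter names case-sensitively and append each
    #       name and value to the URL string with no delimiters.
    if post_params:
        for name in sorted(post_params):
            data += name + post_params[name]
    # 4. Sign the resulting string with HMAC-SHA1, keyed with the auth token.
    digest = hmac.new(auth_token.encode("utf-8"),
                      data.encode("utf-8"),
                      hashlib.sha1).digest()
    # 5. Base64-encode the resulting hash value.
    return base64.b64encode(digest).decode("utf-8")

# Hypothetical call against a protected function:
signature = compute_twilio_signature(
    "your_auth_token",
    "https://foo-3513-dev.twil.io/sms/reply",
    {"Body": "Hello", "From": "+15551234567"},
)
```

You would then send the request with this value in the X-Twilio-Signature header.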

Heyooo. πŸ‘‹ Twilio developer evangelist here.
If you followed the serverless plugin init process by running twilio serverless:init you should have the following project structure.
.
β”œβ”€β”€ assets
β”‚   β”œβ”€β”€ index.html
β”‚   β”œβ”€β”€ message.private.js
β”‚   └── style.css
β”œβ”€β”€ functions
β”‚   β”œβ”€β”€ hello-world.js
β”‚   β”œβ”€β”€ private-message.js
β”‚   └── sms
β”‚       └── reply.protected.js
β”œβ”€β”€ node_modules
β”œβ”€β”€ package-lock.json
└── package.json
These files result in the following HTTP endpoints after you run twilio serverless:deploy. (You will have a different domain.)
Deploying functions & assets to the Twilio Runtime
Account SK6a...
Token kegH****************************
Service Name foo-2
Environment dev
Root Directory /private/tmp/foo
Dependencies
Env Variables
βœ” Serverless project successfully deployed
Deployment Details
Domain: foo-3513-dev.twil.io
Service:
foo (ZS8...)
Environment:
dev (ZE0...)
Build SID:
ZB9...
Functions:
[protected] https://foo-3513-dev.twil.io/sms/reply
https://foo-3513-dev.twil.io/hello-world
https://foo-3513-dev.twil.io/private-message
Assets:
[private] Runtime.getAssets()['/message.js']
https://foo-3513-dev.twil.io/index.html
https://foo-3513-dev.twil.io/style.css
Have a close look at the runtime URLs in the functions block. These are the endpoints that will be available. As you see, the bootstrap project includes two public functions (/hello-world and /private-message). You can call these with curl or your browser.
Additionally, there is one protected function (/sms/reply). This function is only available for calls from within Twilio.
This means that protected functions expect a valid Twilio signature. You can read about that here. If you connect e.g. Studio to call the function, it will work because the webhook includes a Twilio signature. If you want to curl it, you have to provide an X-Twilio-Signature header yourself.
Hope this helps. :)

Related

service worker not intercepting fetch events reliably

I have a webapp that is implemented with flask and nginx (in docker environment)
I want to add a service worker
so I read here how to set the configuration such that the scope is the root directory ('/')
When I start the application I can see that my service worker registers, installs and activates. This happens consistently, as expected.
But I have a problem intercepting the fetch events reliably.
Using Chrome DevTools, if I set a breakpoint in the install handler, wait, and continue, then sometimes the GET operations are routed to the service worker (I can see the console printout from the fetch event listener in the service worker). Once I get to this state, all fetch events are intercepted, as expected.
But if I remove the breakpoints and run the program normally, the service worker doesn't intercept the fetch events.
I read here that the scope of the service worker can cause fetch events to be missed. But in that case the miss would be systematic, i.e. a path outside the scope would never be intercepted.
This is not my case, because under certain conditions my service worker does intercept the fetch calls.
My settings are below.
Thanks,
Avner
# the file structure
/usr/src/app/web/
β”œβ”€β”€ V1
β”‚   β”œβ”€β”€ js
β”‚   β”‚   └── mlj
β”‚   β”‚       β”œβ”€β”€ ...
β”‚   β”‚       └── main.js
β”‚   └── ...
β”œβ”€β”€ project
β”‚   └── sites
β”‚       β”œβ”€β”€ ...
β”‚       └── views.py
└── sw1.js
------------------------------------------------------------
# the file that registers the service worker
cat main.js
...
navigator.serviceWorker.register("../../../sw1.js", {scope: '/'})
  .then(registration => console.log('SW Registered1'))
  .catch(console.error);
------------------------------------------------------------
# the service worker
cat sw1.js
const version = 1;

self.addEventListener('install', function(event) {
  console.log('SW v%s installed at', version, new Date().toLocaleTimeString());
});

self.addEventListener('activate', function(event) {
  console.log('SW v%s activated at', version, new Date().toLocaleTimeString());
});

self.addEventListener('fetch', function(event) {
  console.log('SW v%s fetched at', version, new Date().toLocaleTimeString());
  if (!navigator.onLine) {
    event.respondWith(new Response('<h1> Offline :( </h1>', {headers: {'Content-Type': 'text/html'}}));
  } else {
    console.log(event.request.url);
    event.respondWith(fetch(event.request));
  }
});
------------------------------------------------------------
# the route to the service worker in the flask python file
cat web/project/sites/views.py
...
from flask import current_app, send_from_directory
...
@sites_blueprint.route('/sw1.js', methods=['GET'])
def sw():
    # /usr/src/app
    root_dir = os.path.dirname(os.getcwd())
    filename = 'sw1.js'
    # /usr/src/app/web
    dir1 = os.path.join(root_dir, 'web')
    return send_from_directory(dir1, filename)
I found out here that on the Chrome Developer Tools Network tab, if Disable cache is checked, requests will go to the network instead of the Service Worker, i.e. Service Worker does not get a fetch event.
After enabling the cache by unchecking the button Disable cache (in Chrome devtool -> Network -> Disable cache), fetch events are now intercepted by the service worker.
P.S. Note that using the shortcuts that bypass the cache (in Chrome: Ctrl-F5 or Shift-F5; in Firefox: Ctrl-F5 or Ctrl-Shift-R) achieves the same effect as unchecking the Disable cache button.

Custom plugin for Kong v1.0.2 is enabled but not installed

I have a custom plugin for Kong which worked fine with Kong v0.14.1, but after I upgraded to v1.0.2 it throws an error.
OS used: macOS Mojave
In kong.conf file I have this code:
log_level = debug
plugins=my-custom-plugin
I try to start Kong with this command:
kong start -c kong.conf
and I get this error:
Error: /usr/local/share/lua/5.1/kong/cmd/start.lua:50: nginx: [error] init_by_lua
error: /usr/local/share/lua/5.1/kong/init.lua:344: my-custom-plugin plugin is enabled but not installed;
module 'kong.plugins.my-custom-plugin.handler' not found:No LuaRocks module found for kong.plugins.my-custom-plugin.handler
no field package.preload['kong.plugins.my-custom-plugin.handler']
no file './kong/plugins/kong-my-custom-plugin/handler.lua'...
I installed the plugin using this command:
luarocks make
which gave the following output:
my-custom-plugin 1.0-1 is now installed in /usr/local/opt/kong (license: MIT)
Somehow, it appears that Kong is unable to find my installed custom plugin. Any idea why this happens?
user5377037's answer has most of the relevant details; I just wanted to mention that as of Kong 0.14.x, custom_plugins is now just plugins.
One of the reasons for this change is that you can now use this new variable name to choose whether or not to load the plugins that are bundled with Kong -- a useful feature for some. However, if you want to load your custom plugin AND the bundled plugins, you now have to specify the bundled keyword to indicate that you want to keep the bundled plugins loaded.
Pre 0.14.x
The practical effect is that in Kong < 0.14.x:
custom_plugins = plugin1,plugin2
Or
KONG_CUSTOM_PLUGINS=<plugin-name>
Post 0.14.x
In Kong >= 0.14.x, you now write:
plugins = bundled,plugin1,plugin2
Or
KONG_PLUGINS=bundled,<plugin-name>
If You Don't Use bundled
If you don't add the bundled keyword, you'll likely face something like this error:
nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:292: key-auth plugin is in use but not enabled
stack traceback:
[C]: in function 'assert'
/usr/local/share/lua/5.1/kong/init.lua:292: in function 'init'
init_by_lua:3: in main chunk
This means that you've set your proxy to use some plugin, but now you aren't loading that plugin on startup so Kong doesn't know what to do and quits. Essentially, you will only be loading your one single custom plugin which probably isn't what you want.
lua_package_path
The notes about lua_package_path and KONG_LUA_PACKAGE_PATH remain the same as in user5377037's post.
References
Upgrade Documentation
Configuration Reference
Load the plugin
You must now add the custom plugin’s name to the custom_plugins list in your Kong configuration (on each Kong node):
custom_plugins = <plugin-name>
If you are using two or more custom plugins, insert commas in between, like so:
custom_plugins = plugin1,plugin2
Note: You can also set this property via its environment variable equivalent, KONG_CUSTOM_PLUGINS, or define the custom plugin inline when starting Kong:
KONG_CUSTOM_PLUGINS=<plugin-name> kong start
Reminder: don’t forget to update the custom_plugins directive for each node in your Kong cluster.
Verify loading the plugin
You should now be able to start Kong without any issue. Consult your custom plugin’s instructions on how to enable/configure your plugin on an API or Consumer object.
To make sure your plugin is being loaded by Kong, you can start Kong with a debug log level:
log_level = debug
OR:
KONG_LOG_LEVEL=debug
Then, you should see the following log for each plugin being loaded:
[debug] Loading plugin <plugin-name>
And here are the workaround steps for adding things to custom_plugins and lua_package_path.
Add the custom plugin name in: custom_plugins = <plugin-name>
Install the hello-world plugin using the following steps:
If you have the source code of your plugin, move into it and execute luarocks make; it will install your plugin.
Now execute make install-dev (make sure your plugin has a Makefile).
Once you execute make install-dev, it will create a Lua file at a location something like:
/your-plugin-path/lua_modules/share/lua/5.1/kong/plugins/your-plugin-name/?.lua
Copy this path and add it to the Kong configuration file in lua_package_path:
lua_package_path=/your-plugin-path/lua_modules/share/lua/5.1/kong/plugins/your-plugin-name/?.lua
Then start Kong: kong start --vv
It looks like it can't find the handler.lua file, which is required. Can you run $ tree . at the root of your plugin project?
Here's the result of that same command for a test plugin I did a while back (https://github.com/jerneyio/kong-plugin-header-echo)
$ tree .
.
β”œβ”€β”€ README.md
β”œβ”€β”€ kong
β”‚   └── plugins
β”‚       └── kong-plugin-header-echo
β”‚           β”œβ”€β”€ handler.lua
β”‚           └── schema.lua
β”œβ”€β”€ kong-plugin-header-echo-0.1.0-1.all.rock
└── kong-plugin-header-echo-0.1.0-1.rockspec
Also, are you sure your handler.lua is exposed in your rockspec? Again, successful example here:
$ cat kong-plugin-header-echo-0.1.0-1.rockspec
package = "kong-plugin-header-echo"
version = "0.1.0-1"
source = {
  url = "git://github.com/jerneyio/kong-plugin-header-echo.git"
}
description = {
  homepage = "https://github.com/jerneyio/kong-plugin-header-echo",
  license = "MIT"
}
dependencies = {
  "lua >= 5.3",
  "kong >= 0.14"
}
build = {
  type = "builtin",
  modules = {
    ["kong.plugins.kong-plugin-header-echo.handler"] = "kong/plugins/kong-plugin-header-echo/handler.lua",
    ["kong.plugins.kong-plugin-header-echo.schema"] = "kong/plugins/kong-plugin-header-echo/schema.lua"
  }
}

Generate nodejs from Swagger spec

So I've documented my whole API with Swagger Editor, and now I have my .yaml file. I'm confused about how to take that and generate all the Node.js scaffolding so that the functions are already defined and I just fill them in with the appropriate code.
Swagger Codegen generates server stubs and client SDKs for a variety of languages and frameworks, including Node.js.
To generate a Node.js server stub, run codegen with the -l nodejs-server argument.
Windows example:
java -jar swagger-codegen-cli-2.2.2.jar generate -i petstore.yaml -l nodejs-server -o .\PetstoreServer
You get:
.
β”œβ”€β”€ api
β”‚   └── swagger.yaml
β”œβ”€β”€ controllers
β”‚   β”œβ”€β”€ Pet.js
β”‚   β”œβ”€β”€ PetService.js
β”‚   β”œβ”€β”€ Store.js
β”‚   β”œβ”€β”€ StoreService.js
β”‚   β”œβ”€β”€ User.js
β”‚   └── UserService.js
β”œβ”€β”€ index.js
β”œβ”€β”€ package.json
β”œβ”€β”€ README.md
└── .swagger-codegen-ignore

Load GTK-Glade translations in Windows using Python/PyGObject

I have a Python script that loads a Glade-GUI that can be translated. Everything works fine under Linux, but I am having a lot of trouble understanding the necessary steps on Windows.
All that seems necessary under Linux is:
import locale
[...]
locale.setlocale(locale.LC_ALL, locale.getlocale())
locale.bindtextdomain(APP_NAME, LOCALE_DIR)
[...]
class SomeClass:
    def __init__(self):
        self.builder = Gtk.Builder()
        self.builder.set_translation_domain(APP_NAME)
locale.getlocale() returns for example ('de_DE', 'UTF-8'), the LOCALE_DIR just points at the folder that has the compiled mo-files.
Under Windows this makes things more difficult:
locale.getlocale() in the Python console returns (None, None) and locale.getdefaultlocale() returns ("de_DE", "cp1252"). Furthermore, trying locale.setlocale(locale.LC_ALL, "de_DE") will spit out this error:
locale.setlocale(locale.LC_ALL, "de_DE")
File "C:\Python34\lib\locale.py", line 592, in setlocale
return _setlocale(category, locale)
locale.Error: unsupported locale setting
I leave it to the reader to speculate why Windows does not accept the most common language codes. So instead one is forced to use one of the below lines:
locale.setlocale(locale.LC_ALL, "deu_deu")
locale.setlocale(locale.LC_ALL, "german_germany")
Furthermore, the locale module on Windows does not have the bindtextdomain function. In order to use it one needs to import ctypes:
import ctypes
libintl = ctypes.cdll.LoadLibrary("intl.dll")
libintl.bindtextdomain(APP_NAME, LOCALE_DIR)
libintl.bind_textdomain_codeset(APP_NAME, "UTF-8")
So my questions, apart from how this works, are:
Which intl.dll do I need to include? (I tried the gnome/libintl-8.dll from this source: http://sourceforge.net/projects/pygobjectwin32/, (pygi-aio-3.14.0_rev19-setup.exe))
How can I check whether e.g. the locale deu_deu picks up the correct mo/de/LC_MESSAGES/appname.mo?
Edit
My folder structure (Is it enough to have a de folder? I tried using a deu_deu folder but that did not help):
β”œβ”€β”€ gnome_preamble.py
β”œβ”€β”€ installer.cfg
β”œβ”€β”€ pygibank
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ __main__.py
β”‚   β”œβ”€β”€ mo
β”‚   β”‚   └── de
β”‚   β”‚       └── LC_MESSAGES
β”‚   β”‚           └── pygibank.mo
β”‚   β”œβ”€β”€ po
β”‚   β”‚   β”œβ”€β”€ de.po
β”‚   β”‚   └── pygibank.pot
β”‚   β”œβ”€β”€ pygibank.py
β”‚   └── ui.glade
└── README.md
I put the repository here: https://github.com/tobias47n9e/pygobject-locale
And the compiled Windows installer (64 bit) is here: https://www.dropbox.com/s/qdd5q57ntaymfr4/pygibank_1.0.exe?dl=0
Short summary of the answer
The mo-files should go into the gnome-packages in this way:
β”œβ”€β”€ gnome
β”‚   └── share
β”‚       └── locale
β”‚           └── de
β”‚               └── LC_MESSAGES
β”‚                   └── pygibank.mo
You are close. This is a very complicated subject.
As I wrote in Question 10094335 and in Question 3678174:
To setup the locale to user current locale do not call:
locale.setlocale(locale.LC_ALL, locale.getlocale())
Simply call:
locale.setlocale(locale.LC_ALL, '')
As explained in Python setlocale reference documentation.
This sets the locale for all categories to the user’s default setting (typically specified in the LANG environment variable).
Note that Windows doesn't have the LANG environment variable set up, so you need to do this before that line:
import sys
import os
import locale

if sys.platform.startswith('win'):
    if os.getenv('LANG') is None:
        lang, enc = locale.getdefaultlocale()
        os.environ['LANG'] = lang
This will also make gettext work for in-Python translations.
You can check how this works in the source code here:
https://github.com/python/cpython/blob/master/Modules/_localemodule.c#L90
In particular, the error you're getting:
locale.Error: unsupported locale setting
Is expressed here:
https://github.com/python/cpython/blob/master/Modules/_localemodule.c#L112
This is just a generic error message saying that the C call setlocale failed with the given parameters.
The C call setlocale is defined in the locale.h header. In Linux, this is:
Linux locale.h
In Windows, this is the one used:
Windows locale.h
In Windows locale.h documentation you can read:
The set of language and country/region strings supported by setlocale are listed in Language Strings and Country/Region Strings.
And that points to:
Visual Studio 2010 Language Strings
Visual Studio 2010 Country/Region Strings
As you can see, for the 2010 version the setlocale function expects the locale in the format you found out, deu_deu, which differs from the de_DE format expected on Linux. Your only option is to use a list of OS-dependent locales to set up the locale. Very, very sad indeed.
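One hedged way to cope with this is simply to try the platform-specific spellings in order until one succeeds. The exact candidate list below is an assumption; adjust it to the locales you target:

```python
import locale

def set_german_locale():
    """Try several spellings of the same locale; which one works depends
    on the platform and the toolchain the interpreter was built with."""
    candidates = [
        "de_DE.UTF-8",     # common on Linux
        "de_DE",           # POSIX style
        "de-DE",           # newer Windows (VS 2012+) language-tag style
        "deu_deu",         # Visual Studio 2010 style
        "german_germany",  # Visual Studio 2010 long form
    ]
    for name in candidates:
        try:
            return locale.setlocale(locale.LC_ALL, name)
        except locale.Error:
            continue
    return None  # none of the spellings is supported on this system
```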
There is another issue here. If you change the version of the toolchain, you can see that newer versions of the setlocale function work more closely to what Linux/POSIX does:
Visual Studio 2015 Language Strings
american english en-US
Visual Studio 2010 is the last release to support the old format; starting from version 2012, the new locale format is expected.
As you can imagine, the one you need to use depends on the version of the toolchain that the CPython interpreter you're using was built with. I don't know which version you're using, but according to the official Python Developer's Guide:
Python 3.5 and later use Microsoft Visual Studio 2015. [...]
Python 3.3 and 3.4 use Microsoft Visual Studio 2010. [...]
Most Python versions prior to 3.3 use Microsoft Visual Studio 2008. [...]
That is all related to the Python locale module. Now, for the gettext module or the gettext related functions in the locale module, this is another C library called libintl.
libintl is what is called the C library that is part of gettext for all this translation magic:
libintl reference documentation
One relevant part of this documentation says:
Note that on GNU systems, you don’t need to link with libintl because the gettext library functions are already contained in GNU libc.
But on Windows, because of the issues explained in Question 10094335, you need to load the libintl library that is being used by PyGObject, that is, the very same one it was linked against during the build. That is done with the steps you already wrote.
Which intl.dll do I need to include? (I tried the gnome/libintl-8.dll from this source: http://sourceforge.net/projects/pygobjectwin32/, (pygi-aio-3.14.0_rev19-setup.exe))
So, yes. The one that was linked against when the PyGObject AIO was built.
How can I check if e.g. the locale deu_deu gets the correct mo/de/LC_MESSAGES/appname.mo?
Configure a few messages and check whether they show up translated. Just a note: /mo/de/LC_MESSAGES/appname.mo is not a folder; appname.mo is a file.
Check my first answer for how to create the translation .po file from the Glade file.

Dart scripts that invoke scripts by importing them

I have this setup:
β”œβ”€β”€ bin
β”‚   β”œβ”€β”€ all.dart
β”‚   β”œβ”€β”€ details
β”‚   β”‚   β”œβ”€β”€ script1.dart
β”‚   β”‚   └── script2.dart
β”‚   β”‚   ...
all.dart simply imports script1.dart and script2.dart and calls their mains. The goal is to have a bunch of scripts under details that can be run individually, plus a separate all.dart script that can run them all at once. This makes debugging individual scripts simpler while still allowing all of them to run.
all.dart
import 'details/script1.dart' as script1;
import 'details/script2.dart' as script2;
main() {
  script1.main();
  script2.main();
}
script1.dart
main() => print('script1 run');
script2.dart
main() => print('script2 run');
So, this is working and I see the print statements expected when running all.dart but I have two issues.
First, I have to symlink packages under details. Apparently pub does not propagate the packages symlink down to subfolders. Is this expected or is there a workaround?
Second, there are errors flagged in all.dart at the point of the second import statement. The analyzer error is:
The imported libraries 'script1.dart' and 'script2.dart' should not have the same name ''
So my guess is that since I'm importing other scripts as if they were libraries, and since they don't have a library script[12]; statement at the top, they both have the same name: the empty name?
Note: Originally I had all of these under lib and I could run them as scripts by specifying a suitable --package-root on the command line, even though they were libraries with a main. But to debug I need to run in Dart Editor, which is why I'm moving them to bin. Perhaps the editor should allow libraries under lib with a main to be run as scripts, since they run outside the editor just fine? The actual difference between script and library seems a bit unnecessary (as other scripting languages allow files to be both).
How do I clean this up?
I'm not sure what the actual question is.
If a library has no library statement, the empty string is used as its name.
Just add a library statement with a unique name to fix this.
Adding symlinks to subdirectories solves the problem with the imports for scripts in subdirectories.
I do this regularly.
It was mentioned several times on dartbug.com that symlinks should go away entirely, but I have no idea how long this will take.
I have never tried to put script files with a main in lib, but it is against the package layout conventions, and I guess this is why DartEditor doesn't support it.