Jupyter: XSRF cookie does not match POST - docker

I am trying to transfer files from a Python program running on a local Anaconda to a local Jupyter inside a Docker container, using the Jupyter REST API.
I already managed to execute a requests.get() successfully after muddling through a bit on how to pass the token.
Now I would like to execute a requests.post() command to transfer the files.
Configuration:
local Docker container running on Docker Toolbox for Windows
Docker version 17.04.0-ce, build 4845c56
tensorflow/tensorflow image incl. Jupyter, latest version
jupyter_kernel_gateway==0.3.1
local Anaconda v. 4.3.14 running on a Windows 10 machine
Code:
import base64
import json
import os
import requests

token = token_code_provided_by_jupyter_at_startup
api_url = "http://192.168.99.100:8888/api/contents"
# getting the file's data from disk and converting it into a json body
cwd = os.getcwd()
file_location = cwd + r'\Resources\Test\test_post.py'
payload = open(file_location, 'rb').read()
b64payload = base64.b64encode(payload).decode('ascii')
body = json.dumps({
    'content': b64payload,
    'name': 'test_post.py',
    'path': '/api/contents/',
    'format': 'base64',
    'type': 'file'
})
# getting the xsrf cookie
client = requests.session()
client.get('http://192.168.99.100:8888/')
csrftoken = client.cookies['_xsrf']
headers = {'Content-type': 'application/json', 'X-CSRFToken': csrftoken, 'Referer': 'http://192.168.99.100:8888/api/contents', 'token': token}
response = requests.post(api_url, data=body, headers=headers, verify=True)
Error returned:
[W 12:22:36.710 NotebookApp] 403 POST /api/contents (192.168.99.1): XSRF cookie does not match POST argument
[W 12:22:36.713 NotebookApp] 403 POST /api/contents (192.168.99.1) 4.17ms referer=http://192.168.99.100:8888/api/contents

My solution was inspired by @SaintNazaire. In my Chrome browser, I opened the cookie folder and found the repeated _xsrf items in Cookies. I removed all of them, refreshed Jupyter, and then everything went well.
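Note also that in the question's code the POST goes through requests.post() rather than the client session that holds the _xsrf cookie, so no cookie accompanies the request at all. A minimal sketch of the session-based variant (untested; Tornado also accepts the token in an X-XSRFToken header):
client = requests.session()
client.get('http://192.168.99.100:8888/')
csrftoken = client.cookies['_xsrf']
headers = {'Content-Type': 'application/json', 'X-XSRFToken': csrftoken}
# posting through the same session sends the _xsrf cookie along with the header
response = client.post(api_url, data=body, headers=headers)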

Actually, there is no need for the xsrf cookie when using a header token for authentication.
headers = {'Authorization': 'token ' + token}
Reference is made to the Jupyter notebook documentation.
http://jupyter-notebook.readthedocs.io/en/latest/security.html
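For completeness, a minimal sketch of the whole upload using only the token header (untested; note the Contents API saves a named file with PUT to /api/contents/<path>):
import base64
import json
import requests

token = "token_provided_by_jupyter_at_startup"  # assumption: your startup token
url = "http://192.168.99.100:8888/api/contents/test_post.py"

# hypothetical: the file sits in the current directory for simplicity
with open("test_post.py", "rb") as f:
    b64payload = base64.b64encode(f.read()).decode("ascii")

body = json.dumps({
    "content": b64payload,
    "name": "test_post.py",
    "path": "test_post.py",
    "format": "base64",
    "type": "file",
})

# no xsrf cookie needed when authenticating with the Authorization header
headers = {"Authorization": "token " + token, "Content-Type": "application/json"}
response = requests.put(url, data=body, headers=headers)
print(response.status_code, response.json())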

Related

Erlang :ssh authentication error. How to connect to ssh using identity file

I'm getting an authentication error when trying to connect to an ssh host.
The goal is to connect to the host using local forwarding. The command below is an example using the Dropbear ssh client to connect to the host with local forwarding:
dbclient -N -i /opt/private-key-rsa.dropbear -L 2002:1.2.3.4:2006 -p 2002 -l test_user 11.22.33.44
I have this code so far, which returns an authentication error:
ip = "11.22.33.44"
user = "test_user"
port = 2002
ssh_config = [
  user_interaction: false,
  silently_accept_hosts: true,
  user: String.to_charlist(user),
  user_dir: String.to_charlist("/opt/")
]
# returns an authentication error
{:ok, conn} = :ssh.connect(String.to_charlist(ip), port, ssh_config)
This is the error I'm seeing:
Server: 'SSH-2.0-OpenSSH_5.2'
Disconnects with code = 14 [RFC4253 11.1]: Unable to connect using the available authentication methods
State = {userauth,client}
Module = ssh_connection_handler, Line = 893.
Details:
User auth failed for: "test_user"
I'm a newbie to Elixir and have been reading the Erlang ssh documentation for two days. I did not find any examples in the documentation, which makes it difficult to understand.
You are using a non-default key name, private-key-rsa.dropbear. Erlang by default looks for this set of names:
From ssh module docs:
Optional: one or more User's private key(s) in case of publickey authorization. The default files are
id_dsa and id_dsa.pub
id_rsa and id_rsa.pub
id_ecdsa and id_ecdsa.pub
To verify this is the reason, try renaming private-key-rsa.dropbear to id_rsa. If this works, the next step would be to add a key_cb callback to the ssh_config which returns the correct key file name.
One example implementation of a similar feature is labzero/ssh_client_key_api.
The solution was to convert the Dropbear key to an OpenSSH key. I used this link as reference.
Here is the command to convert a Dropbear key to an OpenSSH key:
/usr/lib/dropbear/dropbearconvert dropbear openssh /opt/private-key-rsa.dropbear /opt/id_rsa

Hyperledger Sawtooth - Preflight error while submitting transaction

I am trying to submit a transaction to Hyperledger Sawtooth v1.0.1 using JavaScript to a validator running on localhost. The code for the POST request is below:
request.post({
    url: constants.API_URL + '/batches',
    body: batchListBytes,
    headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => {
    if (err) {
        console.log(err);
        return cb(err);
    }
    console.log(response.body);
    return cb(null, response.body);
});
The transaction gets processed when submitted from a backend Node.js application, but it returns an OPTIONS http://localhost:8080/batches 405 (Method Not Allowed) error when submitted from the client (browser). These are the options that I have tried:
Inject Access-Control-Allow-* headers into the response using an extension: the response still gives the same error.
Remove the custom header to bypass the preflight request: this makes the validator throw an error as shown:
...
sawtooth-rest-api-default | KeyError: "Key not found: 'Content-Type'"
sawtooth-rest-api-default | [2018-03-15 08:07:37.670 ERROR web_protocol] Error handling request
sawtooth-rest-api-default | Traceback (most recent call last):
...
The unmodified POST request from the browser gets the following response headers from the validator:
HTTP/1.1 405 Method Not Allowed
Content-Type: text/plain; charset=utf-8
Allow: GET,HEAD,POST
Content-Length: 23
Date: Thu, 15 Mar 2018 08:42:01 GMT
Server: Python/3.5 aiohttp/2.3.2
So, I guess the OPTIONS method is not handled by the validator. A GET request for the state goes through fine when the CORS headers are added. This issue was also not present in Sawtooth v0.8.
I am using Docker to start the validator, and the commands to start it are a slightly modified version of those given in the LinuxFoundationX LFS171x course. The relevant commands are below:
bash -c \"\
sawadm keygen && \
sawtooth keygen my_key && \
sawset genesis -k /root/.sawtooth/keys/my_key.priv && \
sawadm genesis config-genesis.batch && \
sawtooth-validator -vv \
--endpoint tcp://validator:8800 \
--bind component:tcp://eth0:4004 \
--bind network:tcp://eth0:8800
Can someone please guide me as to how to solve this problem?
CORS issues are always the best.
What is CORS?
Your browser is trying to protect users from being directed to a page they think is the frontend for an API, but which is actually fraudulent. Anytime a web page tries to access an API on a different domain, that API will need to explicitly give the web page permission, or the browser will block the request. This is why you can query the API from Node.js (no browser), and can put the REST API address directly into your address bar (same domain). However, trying to go from localhost:3000 to localhost:8008, or from file://path/to/your/index.html to localhost:8008, is going to get blocked.
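You can reproduce the browser's preflight by hand to see the failure; a sketch in Python (assuming the REST API is reachable at localhost:8008):
import requests

# Send the same OPTIONS preflight the browser would send before the POST
resp = requests.options(
    "http://localhost:8008/batches",
    headers={
        "Origin": "http://localhost:3000",
        "Access-Control-Request-Method": "POST",
        "Access-Control-Request-Headers": "Content-Type",
    },
)
print(resp.status_code)  # 405: the REST API has no OPTIONS handler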
Why doesn't the Sawtooth REST API handle OPTIONS requests?
The Sawtooth REST API does not know the domain you are going to run your web page from, so it can't whitelist it explicitly. It is possible to whitelist all domains, but this obviously destroys any protection CORS might give you. Rather than try to weigh the costs and benefits of this approach for all Sawtooth users everywhere, the decision was made to make the REST API as lightweight and security agnostic as possible. Any developer using it would be expected to put it behind a proxy server, and they can make whatever security decisions they need on that proxy layer.
So how do you fix it?
You need to set up a proxy server that will put the REST API and your web page on the same domain. There is no quick configuration option for this; you will have to set up an actual server. Obviously there are lots of ways to do this. If you are already familiar with Node, you could serve the page from Node.js and have the Node server proxy the API calls. If you are already running all of the Sawtooth components with docker-compose, though, it might be easier to use Docker and Apache.
Setting up an Apache Proxy with Docker
Create your Dockerfile
In the same directory as your web app create a text file called "Dockerfile" (no extension). Then make it look like this:
FROM httpd:2.4
RUN echo "\
LoadModule proxy_module modules/mod_proxy.so\n\
LoadModule proxy_http_module modules/mod_proxy_http.so\n\
ProxyPass /api http://rest-api:8008\n\
ProxyPassReverse /api http://rest-api:8008\n\
RequestHeader set X-Forwarded-Path \"/api\"\n\
" >>/usr/local/apache2/conf/httpd.conf
This is going to do a couple of things. First it will pull down the httpd image from Docker Hub, which is just a simple static server. Then we use a bit of bash to add five lines to Apache's configuration file. These five lines import the proxy modules, tell Apache that we want to proxy http://rest-api:8008 to the /api route, and set the X-Forwarded-Path header so the REST API can properly build response URLs. Make sure that rest-api matches the actual name of the Sawtooth REST API service in your docker compose file.
Modify your docker compose file
Now, to the docker compose YAML file you are running Sawtooth through, you want to add a new property under the services key:
services:
  my-web-page:
    build: ./path/to/web/dir/
    image: my-web-page
    container_name: my-web-page
    volumes:
      - ./path/to/web/dir/public/:/usr/local/apache2/htdocs/
    expose:
      - 80
    ports:
      - '8000:80'
    depends_on:
      - rest-api
This will build your Dockerfile located at ./path/to/web/dir/Dockerfile (relative to the docker compose file), and run it with its default command, which is to start up Apache. Apache will serve whatever files are located in /usr/local/apache2/htdocs/, so we'll use volumes to link the path to your web files on your host machine (i.e. ./path/to/web/dir/public/), to that directory in the container. This is basically an alias, so if you update your web app later, you don't need to restart this docker container to see the changes. Finally, ports will take the server, which is at port 80 inside the container, and forward it out to localhost:8000.
Running it all
Now you should be able to run:
docker-compose -f path/to/your/compose-file.yaml up
And it will start up your Apache server along with the Sawtooth REST API and validator and any other services you defined. If you go to http://localhost:8000, you should see your web page, and if you go to http://localhost:8000/api/blocks, you should see a JSON representation of the blocks on chain. More importantly you should be able to make the request from your web app:
request.post({
    url: 'api/batches',
    body: batchListBytes,
    headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => console.log(response));
Whew. Sorry for the long response, but I'm not sure if it is possible to solve CORS any faster. Hopefully this helps.
The transaction header should have details like the address(es) of the state where the data will be saved. Here is an example which I have used and which works fine for me:
String payload = "create,0001,BLockchain CPU,Black,5000";
logger.info("Sending payload as - " + payload);
String payloadBytes = Utils.hash512(payload.getBytes()); // fix for invalid payload serialization
ByteString payloadByteString = ByteString.copyFrom(payload.getBytes());
String address = getAddress(IDEM, ITEM_ID); // get unique address for inputs and outputs
logger.info("Sending address as - " + address);
TransactionHeader txnHeader = TransactionHeader.newBuilder().clearBatcherPublicKey()
        .setBatcherPublicKey(publicKeyHex)
        .setFamilyName(IDEM) // Idem family
        .setFamilyVersion(VER)
        .addInputs(address)
        .setNonce("1")
        .addOutputs(address)
        .setPayloadSha512(payloadBytes)
        .setSignerPublicKey(publicKeyHex)
        .build();
ByteString txnHeaderBytes = txnHeader.toByteString();
byte[] txnHeaderSignature = privateKey.signMessage(txnHeaderBytes.toString()).getBytes(); // not used below
String value = Signing.sign(privateKey, txnHeader.toByteArray());
Transaction txn = Transaction.newBuilder().setHeader(txnHeaderBytes).setPayload(payloadByteString)
        .setHeaderSignature(value).build();
BatchHeader batchHeader = BatchHeader.newBuilder().clearSignerPublicKey().setSignerPublicKey(publicKeyHex)
        .addTransactionIds(txn.getHeaderSignature()).build();
ByteString batchHeaderBytes = batchHeader.toByteString();
byte[] batchHeaderSignature = privateKey.signMessage(batchHeaderBytes.toString()).getBytes(); // not used below
String value_batch = Signing.sign(privateKey, batchHeader.toByteArray());
Batch batch = Batch.newBuilder()
        .setHeader(batchHeaderBytes)
        .setHeaderSignature(value_batch)
        .setTrace(true)
        .addTransactions(txn)
        .build();
BatchList batchList = BatchList.newBuilder()
        .addBatches(batch)
        .build();
ByteString batchBytes = batchList.toByteString();
String serverResponse = Unirest.post("http://localhost:8008/batches")
        .header("Content-Type", "application/octet-stream")
        .body(batchBytes.toByteArray())
        .asString()
        .getBody();
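Since the REST API just accepts serialized BatchList bytes over HTTP, the same submission works from any client; a sketch in Python (assuming batch_list.bin holds a BatchList serialized as above):
import requests

# Hypothetical file containing the serialized BatchList protobuf
with open("batch_list.bin", "rb") as f:
    batch_list_bytes = f.read()

resp = requests.post(
    "http://localhost:8008/batches",
    data=batch_list_bytes,
    headers={"Content-Type": "application/octet-stream"},
)
print(resp.status_code, resp.text)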

Windows Etsy: Peer certificate cannot be authenticated with given CA certificates

In an effort to be OAuth'd with Etsy, I have tried countless solutions in C# to at least start the authentication process (i.e. get the login URL):
e.g.
mashery.com, http://term.ie/oauth/example/client.php and question #8321034
but the response is always the same:
oauth_problem=signature_invalid&debug_sbs=GET&https%3A%2F%2Fopenapi.etsy.com%2Fv2%2Foauth%2Frequest_token&oauth_consumer_key%3D...my-consumer-key...%26oauth_nonce%3D2de91e1361d1906bbae04b15f42ab38d%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1502362164%26oauth_version%3D1.0%26scope%3Dlistings_w%2520listings_r
and so I'm resorting to the dreaded world of PHP...
On my machine, I've installed the following (Windows 10):
XAMPP (xampp-win32-7.1.7-0-VC14-installer) with default options
JDK (jdk-8u144-windows-i586)
JRE (jre-8u144-windows-i586)
php_oauth.dll (php_oauth-2.0.2-7.1-ts-vc14-x86.zip) and copying it to C:\xampp\php\ext
cacert.pem (dated Jun 7 03:12:05 2017) and copying it to the following directories:
C:\xampp\perl\vendor\lib\Mozilla\CA
C:\xampp\phpMyAdmin\vendor\guzzle\guzzle\src\Guzzle\Http\Resources
Apache and Tomcat would not run to begin with from XAMPP because it said that ports 443 and 80 were being used/blocked and so I duly changed these to 444 and 122 in
C:\xampp\apache\conf\extra\httpd-ssl.conf
C:\xampp\apache\conf\httpd.conf
All good so far but when I run the following script in my browser (http://localhost:444/dashboard/etsy.php):
<?php
$base_uri = 'https://openapi.etsy.com';
$api_key = 'my-etsy-api-key';
$secret = 'my-etsy-api-secret';
$oauth = new OAuth($api_key, $secret, OAUTH_SIG_METHOD_HMACSHA1, OAUTH_AUTH_TYPE_URI);
$req_token = $oauth->getRequestToken($base_uri .= "/v2/oauth/request_token?scope=listings_w%20transactions_r", 'oob');
$login_url = $req_token['login_url'];
print "Please log in and allow access: $login_url \n\n";
$verifier = readline("Please enter verifier: ");
$verifier = trim($verifier);
$oauth->setToken($req_token['oauth_token'], $req_token['oauth_token_secret']);
$acc_token = $oauth->getAccessToken($base_uri .= "/v2/oauth/access_token", null, $verifier);
$oauth_token = $acc_token['oauth_token'];
$oauth_token_secret = $acc_token['oauth_token_secret'];
$oauth->setToken($oauth_token, $oauth_token_secret);
print "Token: $oauth_token \n\n";
print "Secret: $oauth_token_secret \n\n";
?>
I get the following error message:
Fatal error: Uncaught OAuthException: making the request failed (Peer
certificate cannot be authenticated with given CA certificates) in
C:\xampp\htdocs\dashboard\etsy.php:8 Stack trace: #0
C:\xampp\htdocs\dashboard\etsy.php(8):
OAuth->getRequestToken('https://openapi...', 'oob') #1 {main} thrown
in C:\xampp\htdocs\dashboard\etsy.php on line 8
I've tried running the script with each thread-safe, x86 version of OAuth (http://windows.php.net/downloads/pecl/releases), stopping and restarting Apache each time, but no luck.
I'm at my wits' end.
How do I resolve this peer certificate problem?
Simply disable the SSL checks locally:
$oauth->disableSSLChecks()
OAuth uses cURL's SSL certificate verification by default. The simple way for a local Apache server is to disable it. Alternatively, configure the CA certificate for cURL; that will also resolve the issue for OAuth.
As per the PHP documentation,
we can simply set the certificate path:
$oauth->setCAPath("F:\xampp\php\extras\ssl\cacert.pem");
print_r($oauth->getCAPath());
You can also set the request engine to cURL or PHP streams if SSL is already configured.
Official PHP documentation
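For comparison, a rough Python analogue of the same fix (a sketch assuming the requests-oauthlib and certifi packages): point the client at a known CA bundle instead of disabling verification.
import certifi
from requests_oauthlib import OAuth1Session

etsy = OAuth1Session("my-etsy-api-key", client_secret="my-etsy-api-secret")
# verify=certifi.where() supplies an explicit CA bundle;
# verify=False would mirror $oauth->disableSSLChecks()
resp = etsy.get(
    "https://openapi.etsy.com/v2/oauth/request_token?scope=listings_w%20transactions_r",
    verify=certifi.where(),
)
print(resp.status_code, resp.text)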

OAuth2 Cygnus and Cosmos sink doesn't seem to work

Since the last update, I haven't been able to upload my data to Cosmos using Cygnus. I am aware that we now need to use an OAuth2 token to do it, so I requested the token:
curl -k -X POST "https://cosmos.lab.fiware.org:13000/cosmos-auth/v1/token" -H "Content-Type: application/x-www-form-urlencoded" -d "grant_type=password&username=guillaume.jourdain@4planet.eu&password=XXXXX"
I get a token, but then I try to check the token:
curl -X GET "http://cosmos.lab.fiware.org:14000/webhdfs/v1/guillaume.jourdain/hostabee?op=liststatus&user.name=guillaume.jourdain@4planet.eu" -H "X-Auth-Token: TheToken"
and even this:
curl -X GET "http://cosmos.lab.fiware.org:14000/webhdfs/v1/guillaume.jourdain/hostabee?op=liststatus&user.name=guillaume.jourdain" -H "X-Auth-Token: TheToken"
And every time, for each of these commands and for every token I tried, I get this:
User token not authorized
Next I tried to put the OAuth parameter in my Cygnus conf file, and this occurred every time:
2015-07-17 16:17:17,797 (lifecycleSupervisor-1-1) [INFO - es.tid.fiware.orionconnectors.cosmosinjector.hdfs.HttpFSBackend.createDir(HttpFSBackend.java:71)] HttpFS response: HTTP/1.1 401 Unauthorized
2015-07-17 16:17:17,798 (lifecycleSupervisor-1-1) [ERROR - es.tid.fiware.orionconnectors.cosmosinjector.OrionHDFSSink.start(OrionHDFSSink.java:108)] The directory could not be created in HDFS. HttpFS response: 401 Unauthorized
So yeah, for the moment I'm kinda stuck. Do you have any information for me to resolve this problem?
EDIT:
Here's my Cygnus configuration file; maybe the problem is located here.
APACHE_FLUME_HOME/conf/cygnus.conf
orionagent.sources = http-source
orionagent.sinks = hdfs-sink
orionagent.channels = notifications
# Flume source, must not be changed
orionagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# channel name where to write the notification events
orionagent.sources.http-source.channels = notifications
# listening port the Flume source will use for receiving incoming notifications
orionagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
orionagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# regular expression for the orion version the notifications will have in their headers
orionagent.sources.http-source.handler.orion_version = 0\.23\.*
# URL target
orionagent.sources.http-source.handler.notification_target = /notify
# channel name from where to read notification events
orionagent.sinks.hdfs-sink.channel = notifications
# Flume sink that will process and persist in HDFS the notification events, must not be changed
orionagent.sinks.hdfs-sink.type = com.telefonica.iot.cygnus.sinks.OrionHDFSSink
# IP address of the Cosmos deployment where the notification events will be persisted
orionagent.sinks.hdfs-sink.cosmos_host = 130.206.80.46
# port of the Cosmos service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs and free choice for inifinty
orionagent.sinks.hdfs-sink.cosmos_port = 14000
# username allowed to write in HDFS (/user/myusername)
orionagent.sinks.hdfs-sink.cosmos_username = guillaume.jourdain
orionagent.sinks.hdfs-sink.cosmos_password = XXXXX
# dataset where to persist the data (/user/myusername/mydataset)
orionagent.sinks.hdfs-sink.cosmos_dataset = hostABee
orionagent.sinks.hdfs-sink.attr_persistence = column
orionagent.sinks.hdfs-sink.hive_host = 130.206.80.46
orionagent.sinks.hdfs-sink.hive_port = 10000
orionagent.sinks.hdfs-sink.oauth2_token = TheTOKEN
# HDFS backend type (webhdfs, httpfs or infinity)
orionagent.sinks.hdfs-sink.hdfs_api = webhdfs
# channel name
orionagent.channels.notifications.type = memory
# capacity of the channel
orionagent.channels.notifications.capacity = 1000
# amount of bytes that can be sent per transaction
orionagent.channels.notifications.transactionCapacity = 100
Now I get this error (and others). The sink and the handlers don't seem to be found:
2015-07-27 14:27:10,562 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:40)] Creating instance of sink: hdfs-sink, type: com.telefonica.iot.cygnus.sinks.OrionHDFSSink
2015-07-27 14:27:10,562 (conf-file-poller-0) [ERROR - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:142)] Failed to load configuration data. Exception follows.
org.apache.flume.FlumeException: Unable to load sink type: com.telefonica.iot.cygnus.sinks.OrionHDFSSink, class: com.telefonica.iot.cygnus.sinks.OrionHDFSSink
at org.apache.flume.sink.DefaultSinkFactory.getClass(DefaultSinkFactory.java:69)
at org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:415)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:103)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
Caused by: java.lang.ClassNotFoundException: com.telefonica.iot.cygnus.sinks.OrionHDFSSink
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:190)
at org.apache.flume.sink.DefaultSinkFactory.getClass(DefaultSinkFactory.java:67)
... 12 more
Thank you for reading.
Regarding the WebHDFS command for listing an HDFS folder:
curl -X GET "http://cosmos.lab.fiware.org:14000/webhdfs/v1/guillaume.jourdain/hostabee?op=liststatus&user.name=guillaume.jourdain@4planet.eu" -H "X-Auth-Token: TheToken"
the user.name should be user.name=guillaume.jourdain (without the @4planet.eu part).
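The same check in Python, as a quick sketch (placeholder token from the question):
import requests

resp = requests.get(
    "http://cosmos.lab.fiware.org:14000/webhdfs/v1/guillaume.jourdain/hostabee",
    params={"op": "liststatus", "user.name": "guillaume.jourdain"},
    headers={"X-Auth-Token": "TheToken"},
)
print(resp.status_code, resp.text)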
Regarding Cygnus, have you upgraded to 0.8.2? It is the only Cygnus version supporting OAuth2. I guess you did not upgrade, given the es.tid.fiware.orionconnectors.cosmosinjector.OrionHDFSSink logs (those packages are prior to 0.8.0). You have all the details for upgrading here.

Difficulty in sourcing tcl files from sharepoint

I have Tcl byte code on SharePoint, with a URL like
https://share.abc.com/sites/abc/test.tcl
I want to source this file in another tcl file residing on my machine.
I don't want to copy the file from sharepoint.
Can anyone help me out here?
The source command only reads from the filesystem, but that can be a virtual filesystem. Thus, you can use the tclvfs package to make it so that HTTP sites can be mounted within the process, and then you can read from that.
# Add in HTTPS support
package require http
package require tls
::http::register https 443 ::tls::socket
# Mount the site; the vfs::urltype package won't work as it doesn't support https
package require vfs::http
# Double quotes only because of Stack Overflow highlighting sucking
vfs::http::Mount "https://share.abc.com/" /https.share.abc.com
# Load and evaluate the file
source /https.share.abc.com/sites/abc/test.tcl
This all assumes that you don't need any username/password credentials. If you do, you need to set them as part of the mount:
vfs::http::Mount "https://theuser:thepassword@share.abc.com/" /https.share.abc.com
Note that this currently requires that you're using HTTP Basic Auth (over HTTPS). That's sufficiently secure for almost any reasonable use.
This is quite a large stack of stuff. You can do it in rather less if you are willing to do some more of the work yourself:
package require base64
package require http
package require tls
::http::register https 443 ::tls::socket

proc source_https {url username password} {
    set auth "Basic [base64::encode ${username}:${password}]"
    set headers [list Authorization $auth]
    set tok [http::geturl $url -headers $headers]
    if {[http::ncode $tok] != 200} {
        # Cheap and nasty version...
        set msg [http::code $tok]
        http::cleanup $tok
        error "Problem with fetch: $msg"
    }
    set script [http::data $tok]
    http::cleanup $tok
    # These next two commands are effectively what [source] does (apart from I/O)
    info script $url
    uplevel 1 $script
}

source_https "https://share.abc.com/sites/abc/test.tcl" AzureDiamond hunter2
