I have an IIS site that uses NTLM authentication, and sitespeed.io, run via Docker, seems to be unable to get past the NTLM part.
I'm very new to sitespeed.io but have searched their documentation and found nothing to say it does, or does not, specifically support NTLM.
The script I have been running on a Windows 10 machine is:
docker run --rm -v "%cd%":/sitespeed.io sitespeedio/sitespeed.io http://intranet.company.com/Pages/default.aspx#/
The configuration docs indicate that only Basic auth is supported, and this issue comment confirms that NTLM is not supported. (Generally, I'd assume that if something's docs don't affirmatively say the tool supports NTLM auth, it probably doesn't support NTLM.)
You'll need to disable auth entirely, enable Basic auth on IIS, or route your requests through an NTLM proxy (which is not ideal, since it necessarily affects the timings you're trying to measure).
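If you go the Basic-auth route, sitespeed.io can send the credentials for you. A minimal sketch, assuming Basic auth has been enabled on IIS and that your sitespeed.io version supports the --basicAuth username@password option described in its configuration docs (verify with --help before relying on it; the credentials below are placeholders):

```shell
docker run --rm -v "%cd%":/sitespeed.io sitespeedio/sitespeed.io ^
  --basicAuth myuser@mypassword ^
  http://intranet.company.com/Pages/default.aspx#/
```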
I am using the e-signature Java SDK for the application that I developed.
The application will run on a docker container and the container on a Linux server.
There is a proxy configured on this server and I have been asked if there is anything that they have to configure regarding DocuSign integration.
This answer on GitHub says that SDK would automatically pick up the proxy settings of the system.
What happens in my case? Will it pick up the server's settings or the container's? Should I manually set the proxy settings in code?
Unfortunately I do not have access to the system (or to any similar system) so it is not possible to test the application.
The answer you linked to (https://github.com/docusign/docusign-esign-java-client/issues/152#issuecomment-653926077) discusses an enhancement request that would let a specific ApiClient use its own proxy in the Java SDK.
You do need to update the proxy settings in your code if you know what they are.
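To make that concrete: the JVM reads the standard http.proxyHost/http.proxyPort (and https.*) system properties, so what matters is what the process inside the container sees, not the host's proxy configuration. A sketch, where the image name, proxy host, and port are assumptions:

```shell
# JAVA_TOOL_OPTIONS is picked up by any JVM at startup, so the SDK's
# HTTP calls will go through the proxy without code changes.
docker run --rm \
  -e JAVA_TOOL_OPTIONS="-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080" \
  my-docusign-app:latest
```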
I have tried dependabot-script with Azure DevOps without any big hurdles (I noticed Dependabot throws a "repo not found" error when I used a user access token rather than the system access token in ADO), but now, trying it against an enterprise Bitbucket server, I only see the error below.
Has anyone experienced this error?
docker run --rm -v "$(pwd):/home/dependabot/dependabot-script" \
  -w /home/dependabot/dependabot-script \
  -e BITBUCKET_ACCESS_TOKEN=$BITBUCKET_ACCESS_TOKEN \
  -e GITHUB_ACCESS_TOKEN=$GITHUB_ACCESS_TOKEN \
  -e PACKAGE_MANAGER=npm_and_yarn \
  -e PROJECT_PATH=projects/project_name/repos/repo_name \
  bundle exec ruby ./generic-update-script.rb
Error
/home/dependabot/dependabot-script/vendor/ruby/2.7.0/gems/dependabot-common-0.142.0/lib/dependabot/clients/bitbucket.rb:170:in `Clients::Bitbucket::NotFound)
I think the problem is in:
PROJECT_PATH=projects/project_name/repos/repo_name
You have to use
PROJECT_PATH=project_name/repo_name
At the moment, what you are trying to achieve does not appear to be implemented in Dependabot.
Judging by the code in dependabot-core, Bitbucket enterprise (by which I mean Bitbucket installed on-premises in your company, not the cloud offering) is not supported.
Right at the bottom of the file it reads
def base_url
# TODO: Make this configurable when we support enterprise Bitbucket
"https://api.bitbucket.org/2.0/repositories/"
end
Unfortunately I did not find further hints if this is really true.
The description of dependabot-script implies that you can set an API URL and hostname via BITBUCKET_API_URL and BITBUCKET_HOSTNAME. The defaults there (API 2.0 and bitbucket.org) show that it targets the Bitbucket Cloud API, which I believe differs from the enterprise API (at least in version).
I even tried some of the URLs that the Dependabot code assembles; about half of them work fine against enterprise Bitbucket and some don't (for example, ../pullrequests, which the code uses, does not work for me, because the correct URL on enterprise Bitbucket would be ../pull-requests).
I also checked with Wireshark, since I was trying to get this working too, and found that dependabot-script sends requests to bitbucket.org but not to my enterprise Bitbucket, even when I set BITBUCKET_API_URL and BITBUCKET_HOSTNAME.
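One way to confirm the API mismatch without Dependabot in the loop is to probe your server directly. A sketch, where the host, project key, repo slug, and token are assumptions; Bitbucket Server exposes its REST API under /rest/api/1.0/, unlike Cloud's /2.0/:

```shell
# Enterprise (Bitbucket Server) style URL; expect 200 if the token and
# path are right:
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer $BITBUCKET_ACCESS_TOKEN" \
  "https://bitbucket.example.com/rest/api/1.0/projects/PROJECT_KEY/repos/repo_name/pull-requests"

# The Cloud-style path Dependabot builds; expect this to fail against an
# enterprise install:
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer $BITBUCKET_ACCESS_TOKEN" \
  "https://bitbucket.example.com/2.0/repositories/project_name/repo_name/pullrequests"
```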
I'm trying to setup Varnish (varnish-4.0.5 revision 07eff4c29) using Docker on Plesk. This all seems to work just fine as I'm seeing HITS. The last hurdle I have to take is accessing Varnish from outside the container to clear the cache using our CMS. On another server I can access Varnish just fine, but this is without Plesk and Varnish version is 3.
I've tried several things to get access. From a terminal on the server, I telnet into the Docker container as follows:
telnet 172.17.0.3 6082
To which Varnish responds:
107 59
mrvwpbwcqkmesncevpdnuvfhssasmtob
Given secret key a63b28f6-4346-4049-ee48-4942e8f00be1 I reply:
auth 59886f05927b7d4aa25ef7665c2895b29e8ccd4605ceeb3d98a511675bcd65ad
I have tried responding with every combination of "challenge 0x0a secret 0x0a challenge 0x0a" using a SHA-256 hash, but I cannot seem to get authenticated. I did verify that I get the same SHA-256 hash as the examples in the Varnish 3.0 documentation, so I think I'm at least computing the correct hash for whatever input I test.
How can I best debug this?
I'm doubting the secret I'm using. It is in /etc/varnish/secret (inside the container) but I'm not sure this is the actual file or setting Varnish looks at, even though varnish.params says: VARNISH_SECRET_FILE=/etc/varnish/secret
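One way to rule out a hashing mistake is to compute the response directly from that secret file. A sketch, per the varnish-cli documentation: the digest input is the challenge, a newline, the secret file's contents (including its trailing newline), the challenge again, and a newline. The challenge below is the one from this session; the secret path is the one from varnish.params:

```shell
challenge=mrvwpbwcqkmesncevpdnuvfhssasmtob
# $(cat ...) strips the secret's trailing newline; printf's %s\n adds it
# back, which is correct as long as the file ends in exactly one newline.
secret=$(cat /etc/varnish/secret)
resp=$(printf '%s\n%s\n%s\n' "$challenge" "$secret" "$challenge" | sha256sum | cut -d ' ' -f 1)
echo "auth $resp"
```

If the hash this prints differs from the one you sent, the secret file is not what Varnish is using. Note that the challenge changes on every new CLI connection, so compute the response against the live challenge, not one from an earlier session.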
I did find a related problem where it is suggested I should use varnishadm as the client (https://varnish-cache.org/lists/pipermail/varnish-misc/2015-August/024492.html). But varnishadm is not installed on the server, and I have no permission to install it either.
I have been following the tutorials provided in the Hyperledger Composer docs, but I am not getting the results they say I should. Specifically, when I enable multiple-user mode for the REST server and call one of the business network REST API operations from the REST API explorer, I always get an HTTP 401 Authorization Required. According to the tutorial, this error means I have not authenticated correctly to the REST API, but it does not explain why the error occurs or how I might fix it.
It is very important for application development to be able to authenticate each user who wants to make requests to the API.
What version of Hyperledger Composer are you using?
The tutorial/document you refer to is correct for v0.15.0 and works a little differently for prior versions.
Are you seeing an Access token at the top of the browser window? This indicates that you have successfully authenticated and can continue with the steps for the Wallet.
If you are not seeing an Access token displayed, then make sure you hit your REST server with a URL similar to http://localhost:3000/auth/github again and login.
If you are still experiencing problems I would suggest going back to just using authentication without multiuser mode and verify that the authentication works properly from there.
After some research, I found a solution that worked for me.
If you have already enabled GitHub authentication, you can skip this step. Otherwise, first enable authentication by following the tutorial on Enabling Authentication.
Before starting the REST server, export your admin card from the network using this command:
composer card export -n admin#sample-network -f admincard.card
Now start the REST server with authentication using this command:
composer-rest-server -c admin#sample-network -p 3000 -a true -m true
After a moment the REST server will start.
First, go to this link to authenticate: http://localhost:3000/auth/github
After successful authentication, you will get an access token and also see a Wallet option below.
Now import the card that you exported from your network earlier.
That's it; you should now be able to add anything to your network.
In my case, I had missed two steps:
enabling authentication for the REST server
https://hyperledger.github.io/composer/v0.19/integrating/enabling-rest-authentication.html
starting the server with composer-rest-server -c admin#you_project -a true; previously I had just executed composer-rest-server without specifying the identity admin#you_project.
We want to migrate our custom steps from XAML build to the new build tasks in TFS 2015 on-premises. I installed Node.js and tfx-cli, but when tfx-cli wants to connect to TFS, I need to provide a PAT (personal access token), and I cannot find where to get one. All the samples are for VSO, not for on-premises TFS 2015. Is it possible to get a PAT from on-premises TFS 2015?
TFS 2015 doesn't support personal access tokens; this feature was introduced with TFS 2017. In the meantime you'll either need to configure basic auth and use that (only enable basic auth if your TFS server is running over SSL), or use the trick below to get the command-line tools to authenticate by letting an NTLM-capable proxy (like Fiddler) handle the auth for you.
If you do not want to configure Basic Authentication on your TFS server (which many people don't want due to security concerns), then you can use a neat trick to let Fiddler handle your authentication:
Launch Fiddler (it listens on localhost:8888 by default), then enter:
C:\>set http_proxy=http://localhost:8888
C:\>tfx login --auth-type basic --service-url http://jessehouwing:8080/tfs/DefaultCollection
You'll be prompted for a username and a password; it doesn't really matter what you enter, as Fiddler will handle the authentication for you in the background.
More detailed steps outlined on my blog.
If you're battling self-signed certificates, which is also a common problem when using tfx against an on-premises TFS server, make sure you're using a recent enough version of Node and point it at an additional cert store using environment variables:
As of Node.js 7.3.0 (and the LTS versions 6.10.0 and 4.8.0) it is now possible to add extra well-known certificates to Node.js with an environment variable. This can be useful in cloud or other deployment environments to add trusted certificates as a matter of policy (as opposed to explicit coding), or on personal machines, for example, to add the CAs for proxy servers.
See the CLI documentation for more information on using NODE_EXTRA_CA_CERTS, as well as the original pull-request.
NODE_EXTRA_CA_CERTS=file
Added in: v7.3.0
When set, the well known "root" CAs (like VeriSign) will be extended with the extra certificates in file. The file should consist of one or more trusted certificates in PEM format. A message will be emitted (once) with process.emitWarning() if the file is missing or malformed, but any errors are otherwise ignored.
Note that neither the well known nor extra certificates are used when the ca options property is explicitly specified for a TLS or HTTPS client or server.
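Putting that together for tfx on Windows, a sketch (the certificate path and collection URL are assumptions):

```shell
REM Export your TFS server's root certificate to a PEM file first, then:
set NODE_EXTRA_CA_CERTS=C:\certs\tfs-root-ca.pem
tfx login --auth-type basic --service-url https://tfs.example.com/tfs/DefaultCollection
```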
There's another option for tfx-cli to connect to the TFS instance, and it is basic authentication. Just use the following format:
tfx login --auth-type basic --username myuser --password mypassword --service-url http://tfscollectionurl
Here is the quote from GitHub:
You can alternatively use basic auth by passing --auth-type basic
(read Configuring Basic Auth). NTLM will come soon.
Note: Using this feature will store your login credentials on disk in
plain text.