VS Code Remote Container unable to get local issuer certificate - docker

VSCode Version: 1.62.2
Local OS Version: Windows 10.0.18363
Reproduces in: Remote - Containers
Name of Dev Container Definition with Issue: /vscode/devcontainers/typescript-node
In our company we use a proxy which terminates SSL connections. When I try to start any devcontainer (the workspace is in the WSL2 filesystem), I get the following error message:
Installing VS Code Server for commit 3a6960b964327f0e3882ce18fcebd07ed191b316
[2021-11-12T17:01:44.400Z] Start: Downloading VS Code Server
[2021-11-12T17:01:44.400Z] 3a6960b964327f0e3882ce18fcebd07ed191b316 linux-x64 stable
[2021-11-12T17:01:44.481Z] Stop (81 ms): Downloading VS Code Server
[2021-11-12T17:01:44.499Z] Error: unable to get local issuer certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1497:34)
at TLSSocket.emit (events.js:315:20)
at TLSSocket._finishInit (_tls_wrap.js:932:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:706:12)
In the Dockerfile I copy the company certificates and update the CA store:
ADD ./certs /usr/local/share/ca-certificates
RUN update-ca-certificates 2>/dev/null
The proxy environment variables are also set correctly. Out of desperation I also tried to disable the certificate check for wget:
RUN su node -c "echo check_certificate=off >> ~/.wgetrc"
In the devcontainer configuration I have also set the proxy and disabled the certificate check for VS Code via the settings:
// Set *default* container specific settings.json values on container create.
"settings": {
    "http.proxy": "http://<proxy.url>:8080",
    "http.proxyStrictSSL": false
},
I have tried many other things, such as setting NODE_TLS_REJECT_UNAUTHORIZED=0 as an environment variable inside the Dockerfile, unfortunately without any success. Outside the company network, without the proxy, it works wonderfully.
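For reference, the relevant lines in the Dockerfile look roughly like this (the proxy host and port below are placeholders, not the real values):
# Placeholders for the company proxy; the real host/port differ
ENV HTTP_PROXY=http://proxy.example.com:8080 \
    HTTPS_PROXY=http://proxy.example.com:8080
# Tried out of desperation as well, without effect
ENV NODE_TLS_REJECT_UNAUTHORIZED=0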
Maybe one of you has an idea how I can solve this problem?

A working, if not so nice, solution to the problem is to add HTTPS exceptions for the following domains:
https://update.code.visualstudio.com
https://az764295.vo.msecnd.net
A list of common hostnames can be found here:
https://code.visualstudio.com/docs/setup/network

Related

Install Keycloak adapter on WILDFLY that depends on ENVs in standalone.xml

I am trying to install the Keycloak adapter on my WILDFLY application server, which runs as a Docker container. I am using the image jboss/wildfly:17.0.0.Final as the base image, but I am running into trouble while building my own image.
My Dockerfile:
FROM jboss/wildfly:17.0.0.Final
ENV WILDFLY_HOME /opt/jboss/wildfly
COPY keycloak-adapter.zip $WILDFLY_HOME
RUN unzip $WILDFLY_HOME/keycloak-adapter.zip -d $WILDFLY_HOME
# My standalone.xml that contains ENVs
COPY standalone.xml $WILDFLY_HOME/standalone/configuration/
# Here it crashes!
RUN $WILDFLY_HOME/bin/jboss-cli.sh --file=$WILDFLY_HOME/bin/adapter-elytron-install-offline.cli
The official documentation says:
Unzip the adapter zip file in $WILDFLY_HOME (/opt/jboss/wildfly) - I've done this; it works.
To install the adapter while the server is offline, you need to execute ./bin/jboss-cli.sh --file=bin/adapter-elytron-install-offline.cli, which starts an embedded server (needed because you can't modify the configuration otherwise) and modifies standalone.xml.
Here is the problem. My standalone.xml is parametrized with environment variables that are only set during runtime as it runs in multiple different environments. When the ENVs are not set, the server crashes and so does the command above.
The error during docker build at the last step:
Cannot start embedded server WFLYEMB0021: Cannot start embedded process: JBTHR00005: Operation failed WFLYSRV0056: Server boot has failed in an unrecoverable manner.
The cause
Despite the imprecise error message, I clearly identified the unset ENVs as the cause: I ran the container with bash, set the required ENVs to some random values, executed the jboss-cli command again, and it worked.
I know the docs say it's also possible to configure the adapter while the server is running, but that is not an option for me; I need this configured at the docker build stage.
So the problem is that they provide an offline installation which fails if standalone.xml depends on environment variables, which are usually not set during docker build. Unfortunately, I could not find a way to tell the JBoss CLI to ignore unset environment variables.
Do you know any workaround?
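One workaround sketch, based on the observation above that the CLI succeeds once the variables have values: feed throwaway values to the build step only. DB_HOST and DB_PORT below are hypothetical placeholders for whatever your standalone.xml actually references, and you should verify afterwards that the rewritten standalone.xml still contains the ${env.*} expressions rather than the dummy values:
# Sketch only: dummy values let the embedded server boot during the build.
# DB_HOST/DB_PORT are hypothetical placeholders for your real variables.
RUN DB_HOST=placeholder DB_PORT=5432 \
    $WILDFLY_HOME/bin/jboss-cli.sh --file=$WILDFLY_HOME/bin/adapter-elytron-install-offline.cli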

CircleCI CLI Error on Ubuntu WSL: open .circleci/config.yml: no such file or directory

I'm trying to get the CircleCI CLI tool ( https://circleci.com/docs/2.0/local-jobs/ ) working on Ubuntu WSL on Windows 10. It appeared to install successfully -- and the file permissions appear to be correct. I have Docker for Windows installed and running, and the Linux Docker client works without issue.
But now it always errors when trying to validate a CircleCI config file.
I have tried:
circleci config validate -c .circleci/config.yml
and
circleci config validate
from the root of my repo.
But each time, it gives the error:
Error: open .circleci/config.yml: no such file or directory
Has anyone been able to get this to work?
Running it with sudo worked for me to get past this. However, I then got stuck on the next error.
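That is, prefixing the same command with sudo:
sudo circleci config validate -c .circleci/config.yml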

Cannot start fabric-ca server natively

I have been following this guide to set up a fabric-ca server in my network of 2 organizations and 4 peers (2 in each).
I have two questions:
In the documentation, it says that we can start the server locally. When I try to do that, I'm not able to and get the following error:
fabric-ca-server: command not found. So I tried using a Docker image, and the server now works as a container.
Now when I try to run the fabric-ca-client command, it cannot find the client configuration in the fabric-ca-client home. The FABRIC_CA_HOME environment variable is set to /etc/hyperledger/fabric-ca-server in the container. I'm confused as to what I might be missing here.
If you followed the instructions, the fabric-ca-server executable will be under $GOPATH/bin; you need to add this to your PATH via export PATH=$PATH:$GOPATH/bin. Remember to also set FABRIC_CA_HOME.
Assuming you're also using the client natively, it should likewise be under $GOPATH/bin. In a separate terminal, set FABRIC_CA_HOME to a different path. Then you can enroll the admin user, for example: fabric-ca-client enroll -u http://admin:password@localhost:7054.
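Putting that together, a minimal sketch could look like this (the paths are only examples; pick whatever FABRIC_CA_HOME locations suit you):
# Terminal 1 (server): make the Go-installed binaries reachable and give the server its own home
export PATH=$PATH:$GOPATH/bin
export FABRIC_CA_HOME=$HOME/fabric-ca/server
fabric-ca-server start -b admin:password
# Terminal 2 (client): use a different home, then enroll the bootstrap admin
export PATH=$PATH:$GOPATH/bin
export FABRIC_CA_HOME=$HOME/fabric-ca/client
fabric-ca-client enroll -u http://admin:password@localhost:7054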
The issue occurs because you have not set GOPATH.
After cloning the CA repo, set GOPATH to that directory.
For setting up GOPATH:
(Ubuntu)
If you don’t set a GOPATH, the default will be used.
You have to add $GOPATH/bin to your PATH to execute any binary installed in $GOPATH/bin, or you need to type $GOPATH/bin/the-command.
Add this to your ~/.bash_profile
export PATH=$GOPATH/bin:$PATH
To check the current GOPATH:
go env GOPATH
To change the GOPATH:
export GOPATH=$HOME/your-desired-path
Change 'your-desired-path' to your Fabric-CA repo directory.
You may run into issues with certain versions of Go. Set the flag explicitly using CGO_LDFLAGS_ALLOW='-Wl,--no-as-needed'.
References:
Error while running make command using Fabric 1.0.6 after all the 15 steps
https://github.com/golang/go/issues/23739
➜ fabric-ca git:(release-1.0) go get -u github.com/hyperledger/fabric-ca/cmd/...
go build github.com/hyperledger/fabric-ca/vendor/github.com/miekg/pkcs11: invalid flag in #cgo LDFLAGS: -I/usr/local/share/libtool
➜ fabric-ca git:(release-1.0) export CGO_LDFLAGS_ALLOW='-Wl,--no-as-needed'
➜ fabric-ca git:(release-1.0) make

boot2docker certificate error

I downloaded boot2docker 1.7.1 and installed it through the package wizard. But when I try to run it, it throws this error:
An error occurred trying to connect: Post https://192.168.59.103:2376/v1.19/containers/create: remote error: bad certificate
I tried all of these options, but the issue remains.
You could try running boot2docker shellinit to set up your certificates and print some commands to be executed before you run anything using the docker command.
On a Mac it would look similar to:
Writing /Users/xyz/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/xyz/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/xyz/.boot2docker/certs/boot2docker-vm/key.pem
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/xyz/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
Here you could simply run $(boot2docker shellinit) to set up everything properly.
On Windows you will have to issue some SET commands instead of those export commands. For further information, Windows users should refer to the boot2docker documentation.
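In a Windows command prompt the equivalent would look roughly like this (adjust the certificate path to your user profile):
SET DOCKER_HOST=tcp://192.168.59.103:2376
SET DOCKER_CERT_PATH=%USERPROFILE%\.boot2docker\certs\boot2docker-vm
SET DOCKER_TLS_VERIFY=1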

Jenkins Fail with: Host key verification failed

I downloaded and installed Jenkins for Mac OSX on my Macbook Pro (OS: Mountain Lion). I now want to set it up to pull down a project from bitbucket and do an automatic build.
I created the ssh key, added it to bitbucket and tried to setup a build job. However, I get the error:
Failed to connect to repository : Command "git ls-remote -h HEAD" returned status code 128:
stdout:
stderr: Host key verification failed.
fatal: The remote end hung up unexpectedly
I tried to remove the domain causing the problem from known_hosts but am still getting this error.
Please advise.
I think I've found a possible solution in this post: http://colonelpanic.net/2011/06/jenkins-on-mac-os-x-git-w-ssh-public-key/
Jenkins on Mac OS X
I just finished setting up a build server on Mac OS X using Jenkins (formerly Hudson). The company I’m working for
(GradeCam) uses git and gitolite for our source control and so I
expected no trouble using Jenkins to build our tools using the git
plugin.
However, I quickly ran into a snag: the source control server is on a
public address and so our source code is not available except via ssh,
and gitolite ssh access uses private key authentication. Well, I’m an
experienced unix sysadmin, so that didn’t sound like a big issue —
after all, setting up public key authentication is child’s play, right?
Default install
The default installation of Jenkins on Mac OS X (at the time of this
writing) installs a Launch Agent plist to
/Library/LaunchAgents/org.jenkins-ci.plist. This plist file causes
Jenkins to load as user “daemon”, which sounds fine — except that the
home directory for the “daemon” user is /var/root, same as for user
root. This means that the .ssh dir in there will never have the right
permissions for a private key to be used.
Creating a new hidden user
My solution was to create a new “hidden” user for Jenkins to run
under. Following instructions I found on a blog post, I created a user
“jenkins” with a home directory “/Users/Shared/Jenkins/Home”:
sudo dscl . create /Users/jenkins
sudo dscl . create /Users/jenkins PrimaryGroupID 1
sudo dscl . create /Users/jenkins UniqueID 300
sudo dscl . create /Users/jenkins UserShell /bin/bash
sudo dscl . passwd /Users/jenkins $PASSWORD
sudo dscl . create /Users/jenkins home /Users/Shared/Jenkins/Home/
I then stopped Jenkins: “sudo launchctl unload -w
/Library/LaunchAgents/org.jenkins-ci.plist” and edited the plist file
to set the username to jenkins instead of daemon.
“chown -R jenkins: /Users/Shared/Jenkins/Home”
sets the permissions how they need to be, and then “sudo launchctl
load -w /Library/LaunchAgents/org.jenkins-ci.plist” should get you up
and running!
To get git over ssh running, "sudo su - jenkins" to get a console as
the jenkins user and set up the ssh keys and such. Make sure you can
ssh to where you want to go (or even do a test git clone) because you
need to save the keys so it doesn’t ask for them when jenkins tries to
do the clone.
That should do you! Hope it helps someone.
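For the plist edit described above, one non-interactive option is PlistBuddy. This is only a sketch: it assumes the default install path and that the plist already contains a UserName key (use "Add :UserName string jenkins" otherwise):
sudo launchctl unload -w /Library/LaunchAgents/org.jenkins-ci.plist
# Switch the account launchd starts Jenkins under from "daemon" to "jenkins"
sudo /usr/libexec/PlistBuddy -c "Set :UserName jenkins" /Library/LaunchAgents/org.jenkins-ci.plist
sudo chown -R jenkins: /Users/Shared/Jenkins/Home
sudo launchctl load -w /Library/LaunchAgents/org.jenkins-ci.plist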
