SQL Server on Kubernetes (AKS) login for SA failed - docker

I am trying to deploy a custom SQL Server 2019 Docker image from my Docker Hub repository to a Kubernetes cluster (AKS), but I am not able to log in to the DB instance from outside. It says login for user 'sa' failed.
I have verified the password requirements and literally tried using the same password used in the Microsoft docs, but I still can't log in to SQL Server.
I tried using sqlcmd and Azure Data Studio, and I know the request is reaching the server because the error log has the following error:
Error: 18456, Severity: 14, State: 8
Login failed for user 'sa'. Reason: Password did not match that for the login provided.

I tested the same passwords in my local Docker environment and, incidentally, all of them gave the same error whenever I spun up the container. After trying a few options, I used a simple password, which worked both locally and in k8s.
I have raised an SR with Microsoft to understand the password policy requirements and why some passwords didn't work, even the one provided in their docs.
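For reference, a minimal sketch of how the SA password can be supplied through a Kubernetes Secret and then verified with sqlcmd (the secret name, password value and address are placeholders, not my actual values):
# Store the SA password in a secret (placeholder value)
$ kubectl create secret generic mssql-sa-secret --from-literal=MSSQL_SA_PASSWORD='MyC0mpl3xPassw0rd'
# Expose it to the SQL Server pod as the MSSQL_SA_PASSWORD environment variable (with ACCEPT_EULA=Y),
# then test the login from outside the cluster against the LoadBalancer IP:
$ sqlcmd -S <external-ip>,1433 -U sa -P 'MyC0mpl3xPassw0rd' -Q 'SELECT @@VERSION'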
Thanks @Larnu and @Aaron Bertrand for your time and inputs.

Related

Access Azure Key Vault from an Azure web app where the IP changes often because of CI/CD

I have a Docker container that accesses Azure Key Vault. This works when I run it locally.
I set up an Azure web app to host my container, and it cannot access the Key Vault:
Forbidden (HTTP 403). Failed to complete operation. Message:
Client address is not authorized and caller is not a trusted service.
Client address: 51.142.174.224 Caller:
I followed the suggestion from https://www.youtube.com/watch?v=QIXbyInGXd8:
I went to the web app in the portal and set the status to on,
created an access policy,
and then received the same error with a different IP:
Forbidden (HTTP 403). Failed to complete operation. Message:
Client address is not authorized and caller is not a trusted service.
Client address: 4.234.201.129 Caller:
My web app's IP address changes every time an update is made, so are there any suggestions on how to overcome this?
It might depend on your exact use case and what you want to achieve with your tests, but you could consider using a test double instead of the real Azure Key Vault while running your app locally or on CI.
If you are interested please feel free to check out Lowkey Vault.
I found a solution by setting up a virtual network
and then whitelisting it in the Key Vault's network access rules.
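For anyone trying to reproduce this, a rough sketch of the Azure CLI steps (resource names are placeholders and may need adjusting for your setup):
# Enable a Key Vault service endpoint on the subnet and integrate the web app with it
$ az network vnet subnet update -g <rg> --vnet-name <vnet> -n <subnet> --service-endpoints Microsoft.KeyVault
$ az webapp vnet-integration add -g <rg> -n <app-name> --vnet <vnet> --subnet <subnet>
# Allow that subnet through the Key Vault firewall and deny everything else by default
$ az keyvault network-rule add -g <rg> -n <vault-name> --vnet-name <vnet> --subnet <subnet>
$ az keyvault update -g <rg> -n <vault-name> --default-action Deny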

Docker login to Gitea registry fails even though curl succeeds

I'm using Gitea (on Kubernetes, behind an Ingress) as a Docker image registry. On my network I have gitea.avril aliased to the IP where it's running. I recently found that my Kubernetes cluster was failing to pull images:
Failed to pull image "gitea.avril/scubbo/<image_name>:<tag>": rpc error: code = Unknown desc = failed to pull and unpack image "gitea.avril/scubbo/<image_name>:<tag>": failed to resolve reference "gitea.avril/scubbo/<image_name>:<tag>": failed to authorize: failed to fetch anonymous token: unexpected status: 530
While trying to debug this, I found that I am unable to log in to the registry, even though curling with the same credentials succeeds:
$ curl -k -u "scubbo:$(cat /tmp/gitea-password)" https://gitea.avril/v2/_catalog
{"repositories":[...populated list...]}
# Tell docker login to treat `gitea.avril` as insecure, since certificate is provided by Kubernetes
$ cat /etc/docker/daemon.json
{
"insecure-registries": ["gitea.avril"]
}
$ docker login -u scubbo -p $(cat /tmp/gitea-password) https://gitea.avril
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://gitea.avril/v2/": received unexpected HTTP status: 530
The first request shows up as a 200 OK in the Gitea logs, the second as a 401 Unauthorized.
I get a similar error when I kubectl exec onto the Gitea container itself, install Docker, and try to docker login localhost:3000 - after an error indicating that the server gave an HTTP response to an HTTPS client, it falls back to the http protocol and similarly reports a 530.
I've tried restarting Gitea with GITEA__log__LEVEL=Debug, but that didn't result in any extra logging. I've also tried creating a fresh user (in case I have some weirdness cached somewhere) and using that - same behaviour.
EDIT: after increasing log level to Trace, I noticed that successful attempts to curl result in the following lines:
...rvices/auth/basic.go:67:Verify() [T] [638d16c4] Basic Authorization: Attempting login for: scubbo
...rvices/auth/basic.go:112:Verify() [T] [638d16c4] Basic Authorization: Attempting SignIn for scubbo
...rvices/auth/basic.go:125:Verify() [T] [638d16c4] Basic Authorization: Logged in user 1:scubbo
whereas attempts to docker login result in:
...es/container/auth.go:27:Verify() [T] [638d16d4] ParseAuthorizationToken: no token
This is the case even when doing docker login localhost:3000 from the Gitea container itself (that is - this is not due to some authentication getting dropped by the Kubernetes Ingress).
I'm not sure what could be causing this - I'll start up a fresh Gitea registry to compare.
EDIT: in this GitHub issue, the Gitea team pointed out that standard Docker authentication includes creating a Bearer token which references the ROOT_URL, explaining this issue.
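You can see this token flow in the registry's challenge response; the header below is what I would expect based on that explanation, not captured output:
# docker login first pings /v2/ and then follows the token endpoint advertised in the 401 challenge,
# which Gitea builds from ROOT_URL rather than from the host that was actually contacted
$ curl -ik https://gitea.avril/v2/
# ...expect a 401 whose challenge header looks roughly like:
# WWW-Authenticate: Bearer realm="https://<ROOT_URL>/v2/token",...
# curl with -u succeeds because it sends Basic auth directly and never touches the token endpoint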
Text below preserved for posterity:
...Huh. I have a fix, and I think it indicates some incorrect (or, at least, unexpected) behaviour; but in fairness it only comes about because I'm doing some pretty unexpected things as well...
TL;DR: attempting to docker login to Gitea from an alternative domain name can result in an error if the primary domain name is unavailable, apparently because, while doing so, Gitea itself makes a call to ROOT_URL rather than localhost.
Background
Gitea has a configuration variable called ROOT_URL. This is, among other things, used to generate the copiable "HTTPS" links from repo pages. This is presumed to be the "main" URL on which users will access Gitea.
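(For context, ROOT_URL lives in the [server] section of app.ini and can also be set with Gitea's environment-variable convention; the value here is a placeholder:)
# app.ini:
#   [server]
#   ROOT_URL = https://<primary-domain>/
# or, using the same env-var style as GITEA__log__LEVEL above:
$ export GITEA__server__ROOT_URL=https://<primary-domain>/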
I use Cloudflared Tunnels to make some of my Kubernetes services (including Gitea) available externally (on <foo>.scubbo.org addresses) without opening ports to the outside world. Since Cloudflared tunnels do not automatically update DNS records when a new service is added, I have written a small tool[0] which can be run as an initContainer "before" restarting the Cloudflared tunnel, to refresh DNS[1].
Cold-start problem
However, now there is a cold-start problem:
(Unless I temporarily disable this initContainer) I can't start Cloudflared tunnels if Gitea is unavailable (because it's the source for the initContainer's image)
Gitea('s public address) will be unavailable until Cloudflared tunnels start up.
To get around this cold-start problem, in the Cloudflared initContainers definition, I reference the image by a Kubernetes Ingress name (which is DNS-aliased by my router) gitea.avril rather than by the public (Cloudflared tunnel) name gitea.scubbo.org. The cold-start startup sequence then becomes:
Cloudflared tries to start up, fails to find a registry at gitea.avril, and keeps retrying
Gitea (Pod and Ingress) start up
Cloudflared detects that gitea.avril is now responding, pulls the Cloudflared initContainer image, and successfully deploys
gitea.scubbo.org is now available (via Cloudflared)
So far, so good. Except that testing now indicates[2] that trying to docker login (or docker pull, or, presumably, many other docker commands) against a Gitea instance results in a call to the ROOT_URL domain - which, if Cloudflared isn't up yet, results in an error[3].
So what?
My particular usage of this is clearly an edge case, and I could easily get around this in a number of ways (including moving my "Cloudflared tunnel startup" to a separately-initialized, only-privately-available registry). However, what this reduces to is that "docker API calls to a Gitea instance will fail if the ROOT_URL for the instance is unavailable", which seems like unexpected behaviour to me - if the API call can get through to the Gitea service in the first place, it should be able to succeed in calling itself?
However, I totally recognize that the complexity of fixing this (going through and replacing $ROOT_URL with localhost:$PORT throughout Gitea) might not be worth the value. I'll open an issue with the Gitea team, but I'd be perfectly content with a WILLNOTFIX.
Footnotes
[0]: Note - depending on when you follow that link, you might see a red warning banner indicating "_Your ROOT_URL in app.ini is https://gitea.avril/ but you are visiting https://gitea.scubbo.org/scubbo/cloudflaredtunneldns_". That's because of this very issue!
[1]: Note from the linked issue that the Cloudflared team indicate that this is unexpected usage - "We don't expect the origins to be dynamically added or removed services behind cloudflared".
[2]: I think this is new behaviour, as I'm reasonably certain that I've done a successful "cold start" before. However, I wouldn't swear to it.
[3]: After I've , the error is instead error parsing HTTP 404 response body: unexpected end of JSON input: "" rather than the 530-related errors I got before. This is probably a quirk of Cloudflared's caching or DNS behaviour. I'm working on a minimal reproducing example that circumvents Cloudflared.

How to deploy to Azure App Service Web Site from Docker Hub using Bicep

Summary:
I have made many attempts to deploy a simple C# Blazor image from a public Docker Hub repo to an Azure App Service web site. All attempts using Bicep and the Azure portal have failed.
Goal:
Use Bicep inside a GitHub Action (CI/CD pipeline) to deploy from a public Docker Hub repo to an Azure App Service Web Site. (I'm also curious how to do it in the portal.)
What Works:
This PowerShell command successfully deploys my Docker Hub image to the Azure App Service web site:
az.cmd webapp create --name DockerhubDeployDemo004 --resource-group rg_ --plan Basic-ASP -s siegfried01 -w topsecretet --deployment-container-image-name siegfried01/demovisualstudiocicdforblazorserver
This Bicep for creating an Azure container instance also works.
Error Messages from Failed Attempts:
From the log files in the Azure portal I get:
2022-05-20T21:50:35.914Z ERROR - DockerApiException: Docker API responded with status code=NotFound, response={"message":"pull access denied for demovisualstudiocicdforblazorserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"}
2022-05-20T21:50:35.915Z ERROR - Pulling docker image docker.io/demovisualstudiocicdforblazorserver failed:
2022-05-20T21:50:35.916Z WARN - Image pull failed. Defaulting to local copy if present.
2022-05-20T21:50:35.923Z ERROR - Image pull failed: Verify docker image configuration and credentials (if using private repository)
2022-05-20T21:50:35.928Z INFO - Stopping site dockerdeploydemo003 because it failed during startup.
/home/LogFiles/2022_05_20_lw1sdlwk000FX5_docker.log (https://dockerdeploydemo003.scm.azurewebsites.net/api/vfs/LogFiles/2022_05_20_lw1sdlwk000FX5_docker.log)
2022-05-20T21:35:47.559Z WARN - Image pull failed. Defaulting to local copy if present.
2022-05-20T21:35:47.562Z ERROR - Image pull failed: Verify docker image configuration and credentials (if using private repository)
Failing Bicep Code:
I tried exporting the ARM code from the successful PowerShell deployment and from the failed portal attempts, and converting it to Bicep. In both cases the code was very similar, and in both cases I had to add/edit the app settings containing the Docker Hub URL, account and password. I always received the above error messages. After deploying with the Bicep code, I could go back into the portal and view the app settings (Docker Hub creds & URL); they looked correct.
References:
Nice DockerHub example, but no Bicep code. It says to use index.docker.io for the server and I tried that (it did not work). I also tried using https://index.docker.io/v1/ for the server URL and that did not work either.
Nice Bicep Example but uses ACR instead of DockerHub
Another nice Bicep Example that uses ACR instead of DockerHub.
I was surprised I could not find the documentation on the DockerHub site!
Please help me correct my bicep code. I suspect I'm not specifying the correct URL or server for DockerHub.
Thanks
Siegfried
I could not find the web page on the Docker Hub site that gave the detailed information I was looking for (like the URL). However, the docker info command as described here was very helpful.
This Bicep code did the trick for me (with some help from the Bicep support on GitHub):
var appConfigNew = {
  DOCKER_ENABLE_CI: 'true'
  DOCKER_REGISTRY_SERVER_PASSWORD: dockerhubPassword
  DOCKER_REGISTRY_SERVER_URL: 'https://index.docker.io/v1/'
  DOCKER_REGISTRY_SERVER_USERNAME: dockerUsername
}
resource appSettings 'Microsoft.Web/sites/config@2021-01-15' = {
  name: 'appsettings'
  parent: web
  properties: appConfigNew
}
And lastly, I discovered this by trial and error:
linuxFxVersion: 'DOCKER|${dockerUsername}/demovisualstudiocicdforblazorserver:${tag}'
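That prefix also lines up with the pull failure in the logs above: the failed pull was for docker.io/demovisualstudiocicdforblazorserver with no account name, which Docker resolves to the (non-existent) official image library/demovisualstudiocicdforblazorserver. A quick local check of the difference (just a sketch against the public repo):
# Fails - resolves to docker.io/library/demovisualstudiocicdforblazorserver, which does not exist
docker pull demovisualstudiocicdforblazorserver
# Works - the fully qualified public image
docker pull siegfried01/demovisualstudiocicdforblazorserver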
Wow! I really worked hard for this one!

Login Issue with Weblogic in Docker

I created a WebLogic generic container for version 12.1.3 based on the official Docker images from Oracle at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/dockerfiles
Command: buildDockerImage.sh -g -s -v 12.1.3
This creates the image oracle/weblogic:12.1.3-generic
Using a modified version of the sample Dockerfile for 1213-domain, I built the WebLogic container.
Note: I changed the base image to be generic instead of developer.
docker build -t 1213-domain --build-arg ADMIN_PASSWORD="admin123" -f myDockerfile .
I pushed the built image to Amazon ECR and ran the container using AWS ECS. I configured the port mapping as 0:7001, set the memory soft limit to 1024, and changed nothing else in the default ECS settings. I have an application load balancer in front, which receives traffic on port 443 and forwards it to the containers. In the browser I get a login page for WebLogic; when I enter the username weblogic and the password admin123, I get the error:
Authentication Denied
Interestingly, when I go to the container and connect to WebLogic using WLST, it works fine.
[ec2-user@ip-10-99-103-141 ~]$ docker exec -it 458 bash
[oracle@4580238db23f mydomain]$ /u01/oracle/oracle_common/common/bin/wlst.sh
Initializing WebLogic Scripting Tool (WLST) ...
Jython scans all the jar files it can find at first startup. Depending on the system, this process may take a few minutes to complete, and WLST may not return a prompt right away.
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
wls:/offline> connect("weblogic","admin123","t3://localhost:7001")
Connecting to t3://localhost:7001 with userid weblogic ...
Successfully connected to Admin Server "AdminServer" that belongs to domain "mydomain".
Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.
wls:/mydomain/serverConfig>
Any hints on what can be going wrong?
Very interesting indeed. :) Are you sure there are no special characters when you enter the username and password? Try typing them instead of copy-pasting.
Also, as a backup, since you are able to log in with WLST, you can try two options:
resetting the current password of weblogic, or adding a new username and password.
The links below will help:
http://middlewarebuzz.blogspot.com/2013/06/weblogic-password-reset.html
or
http://middlewaremagic.com/weblogic/?p=4962
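For the password-reset option, the first link boils down to roughly the following sketch (the domain path is assumed from the 1213-domain sample and the new password is a placeholder; back up these files before trying it):
# Inside the container: source the domain environment, regenerate the default authenticator
# seed file with the new credentials, then move the embedded LDAP data aside and restart the AdminServer
cd /u01/oracle/user_projects/domains/mydomain
. bin/setDomainEnv.sh
cd security
java weblogic.security.utils.AdminAccount weblogic 'NewPassw0rd123' .
mv ../servers/AdminServer/data/ldap ../servers/AdminServer/data/ldap.backup
# (if servers/AdminServer/security/boot.properties exists, update it with the new credentials too)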

Remote Access to Secured Jenkins Server

I have a Jenkins installation on a machine running Windows Server 2008. The Jenkins installation is secured using Jenkins' own user database with matrix-based security authorization. Anonymous users don't have any access, except to register an account. I have set up an account and given this account full access.
Now I'd like to trigger a build remotely from a different machine that hosts the repository. I believe this should be possible by accessing the following URL:
https://[username]:[user_api_token]@[address.of.jenkins]:8080/job/[project]/build?token=[project_api_token]
However, this does not seem to be working for me. When I access this URL in a browser, Jenkins forwards me to the login page and does not start the build.
What am I doing wrong? It seems to be an authentication problem, as I'm not logged in after opening the URL above. Furthermore, if I give anonymous users full access, the URL works.
Try invoking the build from a command-line program like curl:
curl http://[userid]:[user_token]@localhost:8080/job/[project]/build?token=[proj_token]
or
curl --user [userid]:[user_token] http://localhost:8080/job/[project]/build?token=[proj_token]
I think your issue could be browser-related, because of embedding credentials within the URL (Firefox pops up a warning in my case telling me I'm about to log in to Jenkins).
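If the command-line call also just redirects to the login page, note that newer Jenkins versions only accept the build trigger as a POST, so it is worth trying (same placeholders as above):
curl -X POST --user [userid]:[user_token] "http://[address.of.jenkins]:8080/job/[project]/build?token=[proj_token]"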
