For a Spring Boot application I was using the Docker image tomcat:jdk11-openjdk-buster from 24-Aug-2021, and swagger-ui.html would load properly.
Today, if I build from the same Dockerfile with absolutely no changes to the code or the Dockerfile, swagger-ui.html no longer loads.
The weird thing is that if I do a curl -I http://xyzxyz.com/warfile/swagger-ui.html I still get a 200:
HTTP/1.1 200
Date: Fri, 17 Dec 2021 14:16:19 GMT
Content-Type: text/html
Content-Length: 3381
Connection: keep-alive
Accept-Ranges: bytes
Last-Modified: Sun, 14 Jan 2018 16:12:50 GMT
I'm convinced something in the Docker image has changed. My initial thought, after going through the issue list at https://github.com/docker-library/tomcat/issues/252, was that maybe the HTTP request was being redirected to HTTPS.
But if that were the case, wouldn't curl return HTTP 302 instead of 200?
Is there a way to find out what has changed in the tomcat:jdk11-openjdk-buster image since August?
If it helps, my environment uses path-based routing with an AWS Application Load Balancer, an ECS service, and a container instance.
I checked the Dockerfile for tomcat:jdk11-openjdk-buster and realised the Tomcat version had been upgraded from 9.x.x to 10.x.x, which was causing my app to not run.
So I had to change the base image to tomcat:9-jdk11-openjdk, which at least pins Tomcat to 9.x.x and OpenJDK to 11.x.x.
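For reference, the Dockerfile change is just the pinned tag; the WAR name and copy path below are placeholders rather than my actual project layout:
FROM tomcat:9-jdk11-openjdk
# Copy the application WAR into Tomcat's webapps directory (placeholder file name)
COPY target/warfile.war /usr/local/tomcat/webapps/warfile.war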
I have a Docker container that hosts an Express site. All it does is run sharp and resize an image.
This works totally fine: I run this container locally, and it does exactly what I want it to.
But when I deploy this image to Google Cloud Run (using gcloud builds submit and gcloud run deploy), the container deploys fine and I can access it, yet the exact same URL that I ran locally (except for swapping out the domain) now returns the image garbled, with content-type text/html. These are the response headers from the Network tab in DevTools:
content-length: 4990
content-type: text/html
date: Fri, 24 Jul 2020 06:11:44 GMT
server: Google Frontend
status: 200
x-powered-by: Express
And this is the response I see when I run the container locally.
Connection: keep-alive
Date: Fri, 24 Jul 2020 06:20:21 GMT
Transfer-Encoding: chunked
X-Powered-By: Express
I don't understand how I can get a different response (notwithstanding any additional headers Google might add) when it is the same container I am running. Especially since Cloud Run returns text/html, while with the local container I see the resized image in my browser.
Any ideas where I might look? As I said - it works flawlessly if I run the container locally using Docker.
The test URL is:
https://image-processor-74rmztaj4a-uc.a.run.app/test
For what it's worth, here is the relevant Express route, though as I said, there is no real problem with the actual code ... it's more about the deployment to Google Cloud Run...
const sharp = require("sharp");
const express = require("express");
const fs = require("fs");
const app = express();
app.get("/test", (req, res) => {
  // sharp() with no input creates a transform stream: resize to fit within 200x200, output JPEG
  const transform = sharp()
    .resize(200, 200, { fit: "inside" })
    .toFormat("jpeg");
  // Stream the source image through the transform and straight into the response
  const readStream = fs.createReadStream("./src/image.jpg");
  return readStream.pipe(transform).pipe(res);
});
const port = process.env.PORT || 8080;
app.listen(port);
It looks like the content-type was being modified by Cloud Run, but the solution in this situation is simply to specify the content-type on the response explicitly to make it work as expected.
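A minimal sketch of that fix, assuming the same /test route as above (image/jpeg matches the toFormat("jpeg") call; adjust it if you output a different format):
app.get("/test", (req, res) => {
  const transform = sharp()
    .resize(200, 200, { fit: "inside" })
    .toFormat("jpeg");
  // Set the Content-Type explicitly so the response is not served as text/html
  res.set("Content-Type", "image/jpeg");
  return fs.createReadStream("./src/image.jpg").pipe(transform).pipe(res);
});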
A little info on our setup:
On-Prem TFS 2018 Update 2 (running as domain Service Account A)
Separate Win2k16 VM hosting a build agent (using a domain Service Account B to run the agent)
Package feed hosted inside a collection on TFS
This has worked for a few months now (the new piece being the package feed for our own NuGet packages). On an MVC project that uses our build servers, the NuGet Restore task fails when trying to connect to our package feed. On failure, the message is:
http://TFS_URL:8080/tfs/Development/_packaging/CustomNuGetFeed/nuget/v3/index.json: Unable to load the service index for source http://TFS_URL:8080/tfs/Development/_packaging/CustomNuGetFeed/nuget/v3/index.json.
Response status code does not indicate success: 401 (Unauthorized).
That's all the info the build log spits out, so I dug deeper, spun up Wireshark, and captured the following:
Request
GET /tfs/Development/_packaging/CustomNuGetFeed/nuget/v3/index.json HTTP/1.1
user-agent: NuGet Command Line/4.4.1 (Microsoft Windows NT 6.2.9200.0)
X-NuGet-Client-Version: 4.4.1
Accept-Language: en-US
Accept-Encoding: gzip, deflate
Authorization: Basic <base64_token>
Host: tfs:8080
Response
HTTP/1.1 401 Unauthorized
Content-Type: text/html
Server: Microsoft-IIS/8.5
X-TFS-ProcessId: d9a45aba-cc82-4f2c-98a3-e4441bfa456f
ActivityId: e780f2d6-1216-46ac-8c66-cb89379c7811
X-TFS-Session: e780f2d6-1216-46ac-8c66-cb89379c7811
X-VSS-E2EID: e780f2d6-1216-46ac-8c66-cb89379c7811
X-FRAME-OPTIONS: SAMEORIGIN
WWW-Authenticate: Bearer
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
WWW-Authenticate: Basic realm="tfs"
X-Powered-By: ASP.NET
P3P: CP="CAO DSP COR ADMa DEV CONo TELo CUR PSA PSD TAI IVDo OUR SAMi BUS DEM NAV STA UNI COM INT PHY ONL FIN PUR LOC CNT"
Lfs-Authenticate: NTLM
X-Content-Type-Options: nosniff
Date: Tue, 16 Oct 2018 19:57:17 GMT
Content-Length: 1293
Response page message
401 - Unauthorized: Access is denied due to invalid credentials.
You do not have permission to view this directory or page using the credentials that you supplied.
However, there's a .NET Core app that still restores packages fine as far as I can tell (unless it's only retrieving the packages from cache).
The credentials for the service account have not changed at all. I've made sure the service accounts have access to the feed, according to these docs: https://learn.microsoft.com/en-us/azure/devops/artifacts/feeds/feed-permissions?view=vsts&tabs=previous-nav
I've also tried disabling Basic Auth in IIS for the TFS site on the TFS server and enabling Windows auth; neither worked.
So I'm at a loss at what the issue could be from all that I've tried/looked into.
TL;DR: NuGet was too old; updating it helped.
We had the same problem after setting up a new build agent machine for TFS 2018 on Windows Server 2019. We did not use Wireshark to inspect the traffic, so this might be unrelated, but the symptoms were the same: one solution worked (using Paket), the other didn't (using NuGet).
The issue was that the failing solution used a nuget.exe (version 2.x) from a committed third-party directory. NuGet is designed to use a 'global' nuget.exe from %localappdata%\nuget if one is available, and that global copy didn't exist. Updating NuGet as the build agent user fixed the issue and placed a recent nuget.exe into %localappdata%:
nuget.exe update -self
I think TFS 2018 requires NTLM authentication instead of basic authentication (which still seemed to be supported by TFS 2017). It still strikes me as odd that the installed VS 2017.9.5 didn't update NuGet.
Even after upgrading to Azure DevOps Server 2019.0.1, I was still receiving a 401 Unauthorized when attempting to authenticate to a package feed that was hosted in the same collection.
Workaround
The workaround I used was to place the package binaries inside the build server's package cache folder, located here:
C:\Users\.nuget\packages
Working Solution
However, a solution was found with the help of Microsoft's VS Developer Community: an updated credential provider needs to be used with the NuGet Restore task.
NuGet 4.8+ is required for this to work, and then two build variables need to be added to the build definition:
NuGet_ForceEnableCredentialProviderV2 = true
NuGet.ForceEnableCredentialProvider = false
According to a Microsoft rep, this will be enabled by default in the Azure DevOps Server 2019.1 update.
You can view the full thread here:
https://developercommunity.visualstudio.com/content/problem/360323/tfs-build-agent-cant-connect-to-package-feed.html
I've been given access to a Rails application hosted on a Linux machine. I need to set up a redirect from the www subdomain to the root domain, and although it could be done in the application (using a before_filter or routes.rb), it's better to do it in the web server. But I don't know which web server it's running on; it may or may not be nginx. That is what I need to find out. How can I do that?
Launch your browser.
Open the developer tools, go to the Network tab
Navigate to the web app.
On the left panel, click on any of the loaded resources.
Select the Headers tab and find the Server header within the Response headers section.
These steps are from Chrome (it's slightly different in Firefox or IE).
You can run the following in command line:
curl -I www.yoururl.com
or
curl -I youripaddress
When I run this for my server I see:
HTTP/1.1 200 OK
Server: nginx
Date: Sat, 27 Dec 2014 02:20:01 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Status: 200 OK
This is a super quick way to find out! :)
Good luck!!
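Once you know which server it is, the www-to-root redirect itself is simple. For example, if it does turn out to be nginx, a minimal sketch (example.com stands in for your actual domain) looks roughly like this:
server {
    listen 80;
    server_name www.example.com;  # placeholder domain
    # Permanent redirect from www to the bare domain, preserving the requested path
    return 301 $scheme://example.com$request_uri;
}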
I had been using RabbitMQ 2.8.6 on Ubuntu 11.10 successfully for quite a long time,
and recently decided to upgrade to the newest version (3.1.x).
I use custom monitoring for RabbitMQ based on the management plugin.
Unfortunately, on the newest version this monitoring stopped working for some unknown reason.
I even tried sending a simple request to check that it still works
curl -i -u guest:guest http://127.0.0.1:55672/api/overview
and got a strange response:
HTTP/1.1 301 Moved Permanently
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Location: http://localhost:15672/api/overview
Date: Sat, 11 May 2013 09:37:04 GMT
Content-Length: 0
instead of the response I got before:
HTTP/1.1 200 OK
Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
Date: Sat, 11 May 2013 09:54:49 GMT
Content-Type: application/json
Content-Length: 1659
Cache-Control: no-cache
{"management_version":"2.8.6","statistics_level":"fine",...
I noticed that the MochiWeb server version has been downgraded. Is this a RabbitMQ bug, or is it my mistake?
Check the official docs here:
http://www.rabbitmq.com/blog/2012/11/19/breaking-things-with-rabbitmq-3-0/
The management plugin now listens on port 15672, not 55672.
This is also clear from the response: the page has permanently moved to http://localhost:15672/api/overview.
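So the same check works again once it is pointed at the new port:
curl -i -u guest:guest http://127.0.0.1:15672/api/overview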
I have set up an MVC website in my own IIS 7 (Windows 7), but no files under the Scripts folder (in the root of the website) are found. If I upload the solution to the web host or run it in the VS built-in IIS, it works fine.
I have set this in web.config:
<authorization>
<allow users="*"/>
</authorization>
I have given full access to the entire website folder (with subfolders and files) to the following users: IIS_IUSRS, NETWORK SERVICE, and IUSR.
The site runs under the .NET 4.0 application pool, which runs as ApplicationPoolIdentity.
I have tried the URL localhost/Scripts/, but it only returns a 404 error.
When checking in Explorer I can see that the files are there.
Edit: Images and CSS do work, but not the Scripts folder and its contents. I have double-checked the permissions.
Didn't you perhaps forget to install the Static Content feature of IIS? You can enable it via Control Panel > Programs > Turn Windows features on or off > Internet Information Services > World Wide Web Services > Common HTTP Features; there, make sure the checkbox for Static Content is checked.
Screenshot:
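If you prefer the command line, the same feature can usually be enabled with DISM; IIS-StaticContent is the standard feature name, but verify it with dism /online /get-features if unsure:
dism /online /enable-feature /featurename:IIS-StaticContent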
I have got the same issue on a brand new Win 2k8 R2 server running ASP.NET.
Did you ever solve the problem?
Using HTTPie, I found that the headers were different between a normal 404 and the 404 errors on the Scripts folder:
HTTP/1.1 404 Not found
Content-Length: 0
Date: Wed, 19 Sep 2012 14:27:15 GMT
Server: Microsoft-HTTPAPI/2.0
Version: 11.0.11282.0
as opposed to
HTTP/1.1 404 Not Found
Cache-Control: private
Content-Length: 5222
Content-Type: text/html; charset=utf-8
Date: Wed, 19 Sep 2012 14:21:32 GMT
Server: Microsoft-IIS/7.5
for any other random folder.
I'm currently investigating whether it is SQL Server Reporting Services that is blocking it ... it appears not.
Searching on the server version string brings up QlikView forums ... QlikView is installed on the server, so I investigated that next ... I stopped QlikView and that fixed it.