Cloud Run (managed) Service is throwing "PERMISSION_DENIED: The caller does not have permission" when using Cloud Logging - google-cloud-run

I have a managed Cloud Run service, user-service, up and running. It is written in Kotlin with Spring Boot; I added the Cloud Logging library and the necessary logback.xml configuration.
A short excerpt:
// build.gradle.kts
implementation("com.google.cloud:google-cloud-logging-logback:0.119.4-alpha")
// logback.xml
<configuration>
    <springProfile name="!cloud, debug">
        <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
            <layout class="ch.qos.logback.classic.PatternLayout">
                <Pattern>
                    %black(%d{ISO8601}) %highlight(%-5level) [%blue(%t)] %yellow(%C{1.}): %msg%n%throwable
                </Pattern>
            </layout>
        </appender>
        <root level="info">
            <appender-ref ref="CONSOLE" />
        </root>
    </springProfile>
    <springProfile name="cloud">
        <appender name="CLOUD" class="com.google.cloud.logging.logback.LoggingAppender"/>
        <root level="info">
            <appender-ref ref="CLOUD" />
        </root>
    </springProfile>
</configuration>
I am running the application on Cloud Run with SPRING_PROFILES_ACTIVE=cloud, so the cloud profile in logback.xml should be active. It is, yet the service is throwing hundreds of exceptions.
I have no idea what I did wrong. I followed this guide: https://cloud.google.com/logging/docs/setup/java
It mentions nothing about any kind of authorization, so I cannot explain the PERMISSION_DENIED error here.
Here is the gcloud run services describe user-service output:
Traffic: https://user-service-53fsfabwe-ew.a.run.app
100% LATEST (currently user-service-00051-xab)
Ingress: all
Last updated on 2021-01-23T10:08:05.204462Z by me#gmail.com:
Revision user-service-00051-xab
commit-sha:1bc273274cf191de6a4712d3f5b6f3cbafce42d2 gcb-build-id:07265ff6-f79b-4b1c-964a-41b4363856c2 gcb-trigger-id:8f88b2c2-eb93-4d3d-89a0-d841061f38c6 managed-by:gcp-cloud-build-deploy-cloud-run
Image: eu.gcr.io/mvp-prototype/user-service/user-service:1bc273274cf191de6a4712d3f5b6f3cbafce42d2
Port: 8080
Memory: 512Mi
CPU: 1000m
Service account: user-service#mvp-prototype.iam.gserviceaccount.com
Env vars:
AUTH0_CLIENT_ID <nope>
AUTH0_CLIENT_SECRET <nope>
AUTH0_DOMAIN <nope>
SPRING_PROFILES_ACTIVE cloud
SQL_CONNECTION 10.28.96.3
SQL_PASSWORD test
SQL_USER test
Concurrency: 80
Max Instances: 1
SQL connections: mvp-prototype:europe-west1:prototype
Timeout: 300s
VPC connector:
Name: projects/mvp-prototype/locations/europe-west1/connectors/cloud-run-to-cloud-sql
Egress: private-ranges-only
Since the service account is custom, I checked whether the proper IAM role for writing logs was set, and added the Logs Writer role to the service account user-service#mvp-prototype.iam.gserviceaccount.com.
However, that did not help either. I am still receiving the exception.
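For reference, granting and verifying that binding at the project level can be done with gcloud roughly as follows. This is a sketch based on the project and service-account names shown in the describe output above (the email uses @ where the redacted output shows #); adjust if yours differ.

```shell
# Grant the Logs Writer role to the custom runtime service account.
gcloud projects add-iam-policy-binding mvp-prototype \
  --member="serviceAccount:user-service@mvp-prototype.iam.gserviceaccount.com" \
  --role="roles/logging.logWriter"

# Verify which roles the service account actually holds on the project.
gcloud projects get-iam-policy mvp-prototype \
  --flatten="bindings[].members" \
  --filter="bindings.members:user-service@" \
  --format="table(bindings.role)"
```

Note that IAM changes can take a minute or two to propagate, and a running revision keeps its cached credentials, so deploying a new revision after changing the binding is worth trying before concluding the role had no effect.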
Here is the detailed log trace:
com.google.cloud.logging.LoggingException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: The caller does not have permission
at com.google.cloud.logging.spi.v2.GrpcLoggingRpc$2.apply(GrpcLoggingRpc.java:201)
at com.google.cloud.logging.spi.v2.GrpcLoggingRpc$2.apply(GrpcLoggingRpc.java:195)
at com.google.api.core.ApiFutures$GaxFunctionToGuavaFunction.apply(ApiFutures.java:240)
at com.google.common.util.concurrent.AbstractCatchingFuture$CatchingFuture.doFallback(AbstractCatchingFuture.java:224)
at com.google.common.util.concurrent.AbstractCatchingFuture$CatchingFuture.doFallback(AbstractCatchingFuture.java:212)
at com.google.common.util.concurrent.AbstractCatchingFuture.run(AbstractCatchingFuture.java:124)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at com.google.api.core.AbstractApiFuture$InternalSettableFuture.setException(AbstractApiFuture.java:95)
at com.google.api.core.AbstractApiFuture.setException(AbstractApiFuture.java:77)
at com.google.api.gax.rpc.BatchedFuture.setException(BatchedFuture.java:55)
at com.google.api.gax.rpc.BatchedRequestIssuer.sendResult(BatchedRequestIssuer.java:84)
at com.google.api.gax.rpc.BatchExecutor$1.onFailure(BatchExecutor.java:98)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at com.google.api.gax.retrying.BasicRetryingFuture.handleAttempt(BasicRetryingFuture.java:198)
at com.google.api.gax.retrying.CallbackChainRetryingFuture$AttemptCompletionListener.handle(CallbackChainRetryingFuture.java:135)
at com.google.api.gax.retrying.CallbackChainRetryingFuture$AttemptCompletionListener.run(CallbackChainRetryingFuture.java:117)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at com.google.api.core.AbstractApiFuture$InternalSettableFuture.setException(AbstractApiFuture.java:95)
at com.google.api.core.AbstractApiFuture.setException(AbstractApiFuture.java:77)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:617)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:803)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:782)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: The caller does not have permission
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:55)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
... 21 common frames omitted
Caused by: io.grpc.StatusRuntimeException: PERMISSION_DENIED: The caller does not have permission
at io.grpc.Status.asRuntimeException(Status.java:533)
... 13 common frames omitted

Related

docker-maven-plugin wait for log

When I run docker-compose up for a docker-compose.yml containing only a DB image
version: '3.3'
services:
  db_stations:
    image: mysql:5.7
    ...
I can see the following logs:
...
db_stations_1 | 2022-02-03T21:54:04.688034Z 0 [Note] Event Scheduler: Loaded 0 events
db_stations_1 | 2022-02-03T21:54:04.688369Z 0 [Note] mysqld: ready for connections.
...
So, when using the docker-maven-plugin to start the container when I run my service, I have the following configuration:
<build>
  <plugins>
    <plugin>
      <groupId>io.fabric8</groupId>
      <artifactId>docker-maven-plugin</artifactId>
      <version>0.38.1</version>
      <configuration>
        <images>
          <image>
            <alias>stations_db</alias>
            <name>mysql:5.7</name>
            <run>
              <wait>
                <log>ready for connections</log>
                <time>5000</time>
              </wait>
            </run>
          </image>
        </images>
      </configuration>
      <executions>
      ...
It crashes with
[ERROR] Failed to execute goal io.fabric8:docker-maven-plugin:0.38.1:start (start) on project stations-service: I/O Error: [mysql:5.7] "stations_db": Container stopped with exit code 1 unexpectedly after 512 ms while waiting on log out 'ready for connections' -> [Help 1]
It does not even wait the configured time before crashing. Curiously enough, if I change the log tag to the following:
<log>a</log>
It works perfectly. So I have two different questions:
Why does it crash without even waiting the expected time?
How can I make it work with my desired string?
Many thanks in advance.
In my case, it works if I enclose the <log> pattern in a CDATA block, like this:
<log><![CDATA[
port: 3306  MySQL Community Server - GPL
]]></log>
<time>30000</time>
I added a 30-second timeout just in case; the default timeout is only 10 seconds.
Hope it helps!
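Another thing worth checking when the container dies with exit code 1 almost immediately: the official mysql:5.7 image refuses to start unless a root-password variable is supplied. This is a guess based on the symptom, not something stated in the original post; a minimal `<run>` block with the env var would look like:

```xml
<run>
  <env>
    <!-- mysql:5.7 exits right away unless one of MYSQL_ROOT_PASSWORD /
         MYSQL_ALLOW_EMPTY_PASSWORD / MYSQL_RANDOM_ROOT_PASSWORD is set.
         "secret" is a placeholder value. -->
    <MYSQL_ROOT_PASSWORD>secret</MYSQL_ROOT_PASSWORD>
  </env>
  <wait>
    <log>ready for connections</log>
    <time>30000</time>
  </wait>
</run>
```

If the container exits before ever printing the expected line, the wait pattern never matches, which would explain the immediate failure regardless of the timeout.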

jib-maven-plugin I/O error for image [registry-1.docker.io/library/adoptopenjdk]

I have developed a Dockerized Spring Boot Application using as base image AdoptOpenJDK and using jib-maven-plugin.
My plugin configuration is:
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>${jib-maven-plugin.version}</version>
  <configuration>
    <from>
      <image>adoptopenjdk:11-jre-hotspot</image>
    </from>
    <to>
      <image>public/my-app</image>
      <tags>
        <tag>latest</tag>
        <tag>${project.version}</tag>
      </tags>
    </to>
    <container>
      <entrypoint>
        <shell>bash</shell>
        <option>-c</option>
        <arg>/entrypoint.sh</arg>
      </entrypoint>
      <ports>
        <port>8080</port>
      </ports>
      <environment>
        <SPRING_OUTPUT_ANSI_ENABLED>ALWAYS</SPRING_OUTPUT_ANSI_ENABLED>
        <JHIPSTER_SLEEP>0</JHIPSTER_SLEEP>
      </environment>
      <creationTime>USE_CURRENT_TIMESTAMP</creationTime>
    </container>
    <extraDirectories>
      <paths>src/main/jib</paths>
      <permissions>
        <permission>
          <file>/entrypoint.sh</file>
          <mode>755</mode>
        </permission>
      </permissions>
    </extraDirectories>
  </configuration>
</plugin>
Everything is OK, and the app builds correctly when I launch ./mvnw package -Pprod -DskipTests jib:build -T16.0C. Now I'm integrating the app into a CI/CD Jenkins pipeline and running a similar command, but passing the auth data via variables:
./mvnw -ntp -T2.0C jib:build -Djib.from.auth.username=myUserName -Djib.from.auth.password=mygitlabtoken01 -Dimage=registry.gitlab.com/myapp -X
When I run it, I get:
[INFO] Using credentials from Docker config (/Users/myUser/.docker/config.json) for registry.gitlab.com/neoris-emea-internal/ianthe/ianthe-app/ianthe
[DEBUG] attempting bearer auth for registry.gitlab.com/app...
[INFO] The base image requires auth. Trying again for adoptopenjdk:11-jre-hotspot...
[INFO] Using credentials from <from><auth> for adoptopenjdk:11-jre-hotspot
[DEBUG] Trying basic auth for adoptopenjdk:11-jre-hotspot...
[DEBUG] configured basic auth for registry-1.docker.io/library/adoptopenjdk
[DEBUG] TIMED Authenticating push to registry.gitlab.com : 1091.927 ms
[DEBUG] TIMED Building and pushing image : 1122.522 ms
[ERROR] I/O error for image [registry-1.docker.io/library/adoptopenjdk]:
[ERROR] javax.net.ssl.SSLHandshakeException
[ERROR] Remote host terminated the handshake
I do not understand:
Why is the Jib plugin using my .docker/config.json when I have provided the auth info with -Djib.from.auth.username=myUserName?
Why am I getting an SSLHandshakeException? The build is using my credentials, and they are correct.
If you look at the log messages carefully, Jib did use the credentials you specified via from.auth.username|password for adoptopenjdk (which is hosted on Docker Hub).
Using credentials from <from><auth> for adoptopenjdk:11-jre-hotspot
Note the following line says the Docker config is used for registry.gitlab.com (the target registry).
Using credentials from Docker config (/Users/myUser/.docker/config.json) for registry.gitlab.com/neoris-emea-internal/ianthe/ianthe-app/ianthe
As for the SSLHandshakeException, it has nothing to do with Docker credentials. The error comes from a much lower network layer (the TLS protocol), so the failure is fundamentally unrelated to Jib or to any application running in the JVM on Jenkins. It is basically telling you that no Java app on that JVM can make a secure TLS connection to certain hosts. There is no simple answer or solution to a TLS handshake failure, so get help from a network and TLS expert if possible. Also check out other SO questions like this one.
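As an aside, if you want Jib to stop falling back to ~/.docker/config.json for the target registry, you can pass explicit credentials for the push side as well; the to.* system properties mirror the from.* ones. A sketch with placeholder values:

```shell
# from.* authenticates the base-image pull (Docker Hub here);
# to.* authenticates the push to the target registry.
# All usernames/passwords below are placeholders.
./mvnw -ntp jib:build \
  -Djib.to.image=registry.gitlab.com/myapp \
  -Djib.from.auth.username=dockerhubUser \
  -Djib.from.auth.password=dockerhubToken \
  -Djib.to.auth.username=gitlabUser \
  -Djib.to.auth.password=gitlabToken
```

This keeps the CI build independent of whatever Docker config happens to be present on the Jenkins agent.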

HTTPS in docker in Service Fabric for asp.net core not working

I have an ASP.NET Core application hosted in Docker. The Dockerfile looks like this:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
LABEL cmbappname="autocomplete"
ARG source
WORKDIR /cmbapp
ADD ${source} .
ENV APP_UTILS=C:\\app
VOLUME ${APP_UTILS}
HEALTHCHECK --retries=5 --interval=100s --start-period=10s CMD curl --fail http://localhost || exit 1
ENTRYPOINT ["dotnet", "MyBus.AutoApi.dll"]
EXPOSE 80
EXPOSE 443
The Docker image is hosted in Service Fabric, which has a service manifest like this:
<?xml version="1.0" encoding="utf-8"?>
<ServiceManifest Name="AutoApiPkg"
                 Version="1.0.0"
                 xmlns="http://schemas.microsoft.com/2011/01/fabric"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <ServiceTypes>
    <!-- This is the name of your ServiceType.
         The UseImplicitHost attribute indicates this is a guest service. -->
    <StatelessServiceType ServiceTypeName="AutoApiType" UseImplicitHost="true" />
  </ServiceTypes>
  <!-- Code package is your service executable. -->
  <CodePackage Name="Code" Version="1.0.0">
    <EntryPoint>
      <!-- Follow this link for more information about deploying Windows containers to Service Fabric: https://aka.ms/sfguestcontainers -->
      <ContainerHost>
        <ImageName>autoApi</ImageName>
      </ContainerHost>
    </EntryPoint>
    <!-- Pass environment variables to your container: -->
    <EnvironmentVariables>
      <EnvironmentVariable Name="ASPNETCORE_ENVIRONMENT" Value="Debug" />
      <EnvironmentVariable Name="ASPNETCORE_URLS" Value="https://*:443/;http://*:80/;https://*:54100/;http://*:54200/"/>
    </EnvironmentVariables>
  </CodePackage>
with the container policies in the Application manifest:
<Policies>
  <ContainerHostPolicies CodePackageRef="Code" AutoRemove="false" UseDefaultRepositoryCredentials="false" ContainersRetentionCount="2" RunInteractive="true">
    <!-- See https://aka.ms/I7z0p9 for how to encrypt your repository password -->
    <PortBinding ContainerPort="443" EndpointRef="AutApiTypeEndpoint" />
    <PortBinding ContainerPort="80" EndpointRef="LocalAutApiTypeEndpoint" />
    <RepositoryCredentials AccountName="[AzureContainerUserName]" Password="[AzureContainerPassword]" PasswordEncrypted="false"/>
    <HealthConfig IncludeDockerHealthStatusInSystemHealthReport="true" RestartContainerOnUnhealthyDockerHealthStatus="false" />
  </ContainerHostPolicies>
</Policies>
The application runs and is functional without the environment variable ASPNETCORE_URLS,
but when I add that variable it is neither functional nor reachable.
Debugging the container gives the following error logs:
Unable to start Kestrel. System.InvalidOperationException: Unable to configure HTTPS endpoint. No server certificate was specified, and the default developer certificate could not be found or is out of date.
Get a certificate, for example by using Let's Encrypt, or use a self-signed certificate (for testing).
Use a volume to attach the certificate file to your container.
Use an environment variable to indicate where the certificate is stored:
ASPNETCORE_Kestrel__Certificates__Default__Path=certificate.pfx
Use another environment variable to provide the password to allow access to the private key:
ASPNETCORE_Kestrel__Certificates__Default__Password="****"
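Following the hints in the error, one way forward (a sketch only, assuming the .pfx is mounted into the container at C:\certs\certificate.pfx; the path and password are placeholders) is to point Kestrel at the certificate through the same EnvironmentVariables block in the service manifest:

```xml
<EnvironmentVariables>
  <EnvironmentVariable Name="ASPNETCORE_ENVIRONMENT" Value="Debug" />
  <EnvironmentVariable Name="ASPNETCORE_URLS" Value="https://*:443" />
  <!-- Placeholder path/password: mount the .pfx into the container
       (e.g. via a volume) and reference its in-container path here. -->
  <EnvironmentVariable Name="ASPNETCORE_Kestrel__Certificates__Default__Path" Value="C:\certs\certificate.pfx" />
  <EnvironmentVariable Name="ASPNETCORE_Kestrel__Certificates__Default__Password" Value="[CertPassword]" />
</EnvironmentVariables>
```

Without a certificate configured this way, any https:// entry in ASPNETCORE_URLS will make Kestrel fail at startup exactly as shown above, which matches the observed behavior of the service only working once the variable is removed.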

How to mount ignite data to host system

I would like to run Apache Ignite in Docker, and I am able to do that.
The problem is that whenever I start the image and create tables in Ignite, they only persist as long as that container is running. If I restart the container or start the image again, the data is gone. I know that each start of the image creates a new container, so as things stand, to persist the data I would have to commit and push the container to get it back on the next start.
Is there any way to store the Ignite data on the host system, so that whenever I start the image it reads/writes the data at that location (in short, volume mounting)?
Can anyone please share their experience or thoughts, with an example?
Thanks.
I am using this with docker-compose and below is my docker-compose.yml file.
version: "3.7"
services:
  ignite:
    image: apacheignite/ignite
    environment:
      - IGNITE_QUIET=false
    volumes:
      - "./ignite-main.xml:/opt/ignite/apache-ignite/config/default-config.xml"
    ports:
      - 11211:11211
      - 47100:47100
      - 47500:47500
      - 49112:49112
If I run the docker-compose up command then I get the below error.
Recreating ignite-test_ignite_1 ... done
Attaching to ignite-test_ignite_1
ignite_1 | Ignite Command Line Startup, ver. 2.7.0#20181130-sha1:256ae401
ignite_1 | 2018 Copyright(C) Apache Software Foundation
ignite_1 |
ignite_1 | class org.apache.ignite.IgniteException: Failed to instantiate Spring XML application context [springUrl=file:/opt/ignite/apache-ignite/config/default-config.xml, err=Line 1 in XML document from URL [file:/opt/ignite/apache-ignite/config/default-config.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 31; cvc-elt.1: Cannot find the declaration of element 'property'.]
Update: after a lot of research and trial and error, I was able to solve this issue. Below is the configuration I used.
1. docker-compose.yml
version: "3.5"
services:
  ignite:
    image: apacheignite/ignite
    environment:
      - IGNITE_QUIET=false
    volumes:
      - ignite-persistence-1:/opt/ignite/
      - "./ignite_1.xml:/opt/ignite/apache-ignite/config/default-config.xml"
    ports:
      - 11211:11211
      - 47100:47100
      - 47500:47500
      - 49112:49112
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 30s
        max_attempts: 10
        window: 180s
volumes:
  ignite-persistence-1:
2. ignite_1.xml (for data persistence)
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
  <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Enabling Apache Ignite Persistent Store. -->
    <property name="dataStorageConfiguration">
      <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
          <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
            <property name="persistenceEnabled" value="true" />
          </bean>
        </property>
      </bean>
    </property>
    <property name="workDirectory" value="/opt/ignite/apache-ignite/data" />
    <!-- Explicitly configure TCP discovery SPI to provide a list of initial nodes. -->
    <property name="discoverySpi">
      <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
          <!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
          <!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
          <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
            <property name="addresses">
              <list>
                <!-- In distributed environment, replace with actual host IP address. -->
                <value>127.0.0.1:47500..47502</value>
              </list>
            </property>
          </bean>
        </property>
      </bean>
    </property>
  </bean>
</beans>
I kept docker-compose.yml and ignite_1.xml in the same directory, opened a terminal there, and executed the command below.
docker-compose up
By using the ignite-persistence-1:/opt/ignite/ volume, I was able to persist the data even after stopping or bringing down docker-compose.
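One caveat worth noting, from the Ignite persistence documentation rather than from the setup above: with native persistence enabled, a new cluster starts in the inactive state and must be activated once before caches accept reads and writes. Assuming the container name from the log output above and the standard in-container Ignite path, this would look roughly like:

```shell
# control.sh ships with the Ignite distribution inside the container;
# the container name is taken from the docker-compose logs above.
docker exec -it ignite-test_ignite_1 \
  /opt/ignite/apache-ignite/bin/control.sh --activate
```

After the first activation, the cluster state itself is persisted along with the data.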
I hope this will help others as well.
Thanks.

Messages from Gelf4Net are not stored in Graylog2

I have an Ubuntu server with Elasticsearch, MongoDB, and Graylog2 running in Azure, and an ASP.NET MVC 4 application I am trying to send logs from. (I am using Gelf4Net / Log4Net as the logging component.) To cut to the chase: nothing is being logged.
(skip to the update to see what is wrong)
The setup
1 Xsmall Ubuntu VM running the needed software for graylog2
everything is running as a daemon
1 Xsmall cloud service with the MVC4 app (2 instances)
A virtual network set up so they can talk.
So what have I tried?
From the Linux box, the following command will cause a message to be logged:
echo "<86>Dec 24 17:05:01 foo-bar CRON[10049]: pam_unix(cron:session):" | nc -w 1 -u 127.0.0.1 514
I can change the IP address to use the public IP and it works fine as well.
Using this PowerShell script, I can log the same message from my dev machine as well as from the production web server.
With the Windows firewall turned off, it still doesn't work.
I can log to a FileAppender via Log4Net, so I know Log4Net is working.
Tailing graylog2.log shows nothing of interest, just a few warnings about my plugin directory.
So I know everything else is working, but I can't get the Gelf4Net appender to work. I'm at a loss here. Where can I look? Is there something I am missing?
GRAYLOG2.CONF
#only showing the connection stuff here. If you need something else let me know
syslog_listen_port = 514
syslog_listen_address = 0.0.0.0
syslog_enable_udp = true
syslog_enable_tcp = false
web.config/Log4Net
//application_start() has log4net.Config.XmlConfigurator.Configure();
<log4net>
  <root>
    <level value="ALL" />
    <appender-ref ref="GelfUdpAppender" />
  </root>
  <appender name="GelfUdpAppender" type="Gelf4net.Appender.GelfUdpAppender, Gelf4net">
    <remoteAddress value="public.ip.of.server"/>
    <remotePort value="514" />
    <layout type="Gelf4net.Layout.GelfLayout, Gelf4net">
      <param name="Facility" value="RandomPhrases" />
    </layout>
  </appender>
</log4net>
update
For some reason it didn't occur to me to run Graylog in debug mode. :) Doing so shows these messages:
2013-04-09 03:00:56,202 INFO : org.graylog2.inputs.syslog.SyslogProcessor - Date could not be parsed. Was set to NOW because allow_override_syslog_date is true.
2013-04-09 03:00:56,202 DEBUG: org.graylog2.inputs.syslog.SyslogProcessor - Skipping incomplete message.
So it is sending an incomplete message. How can I see what is wrong with it?
I was using the wrong port (doh!).
I should have been using the port specified in graylog2.conf: gelf_listen_port = 12201.
So my web.config/log4net GELF appender should have had:
<appender name="GelfUdpAppender" type="Gelf4net.Appender.GelfUdpAppender, Gelf4net">
  ...
  <remotePort value="12201" />
  ...
</appender>
For anyone who may have the same problem: make sure Log4Net reloads the configuration after you change it. I don't have it set to watch the config file for changes, so it took me a few minutes to realize I was still using the wrong port. When I first changed it from 514 to 12201, messages still weren't getting through; I had to restart the server for Log4Net to pick up the new config, and then it started to work.
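A quick way to sanity-check the GELF UDP input independently of Log4Net is the same netcat trick used for the syslog port earlier, but with a minimal GELF payload on 12201 (replace public.ip.of.server with your host):

```shell
# version, host, and short_message are the minimum fields a GELF message
# needs; if this shows up in Graylog, the input and firewall are fine and
# the problem is on the appender side.
echo -n '{"version":"1.1","host":"test","short_message":"hello gelf"}' \
  | nc -w 1 -u public.ip.of.server 12201
```

This isolates whether the failure is in the network path or in the Gelf4Net configuration.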
