Avro IDL: generate one schema file with multiple records

I am currently using Avro IDL to generate my classes. However, when I generate them, a separate Avro schema is generated per record, even though the records are all nested inside one protocol. Is there a way that I can create ONE schema that holds all the records?
protocol CodeQLEvents_v1 {
    record InBoundRequestv1 {
        string WorkflowId;
        string WorkflowInstanceId;
        string RepoName;
        string OwnerName;
        array<string> AdmiralProductId;
    }
    record Model {
        string WorkflowExecutionId;
        string ReportId;
        string ApplicationId;
    }
}
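If you are generating with the avro-tools jar, the subcommand may be the culprit: as far as I can tell, idl2schemata deliberately emits one .avsc file per named record, while idl emits a single .avpr protocol file that contains all the records. The jar and file names below are placeholders:
java -jar avro-tools.jar idl2schemata CodeQLEvents_v1.avdl out/   # one .avsc per record
java -jar avro-tools.jar idl CodeQLEvents_v1.avdl CodeQLEvents_v1.avpr   # one protocol file with all records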

Related

return base64 string from graphql api in rails

I have a GraphQL API and I want to return a PDF file.
As a first step, I need to define the query. Would a String return type work for that query?
field :pdf, String
Returning a PDF file as a String could work, if it is encoded as Base64.
This pdf field would be defined in your schema as a nullable String,
e.g. in your own custom type:
type YourType {
  pdf: String
}
or as a nullable String-based return value of your own GraphQL query:
type Query {
  yourQuery(someParam: String!): String
}
Be sure to provide a server-side query/field resolver (a custom data fetcher)
that takes your PDF file, creates a Base64-encoded String from it, and returns that String.
On your frontend, this encoded String obviously needs to be decoded again...
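A minimal resolver sketch with graphql-ruby (the Types::YourType class and the pdf_path helper are assumptions, not part of your schema):
require "base64"

class Types::YourType < Types::BaseObject
  field :pdf, String, null: true

  def pdf
    # Hypothetical helper that locates the PDF for this object;
    # read the raw bytes and return them Base64-encoded.
    Base64.strict_encode64(File.binread(object.pdf_path))
  end
end
On a JavaScript frontend, atob(pdf) (or Buffer.from(pdf, 'base64') in Node) would turn the String back into bytes.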

Web Api OData Query for custom type

I've defined my own struct that represents a DateTime with a TimeZoneInfo, so I can work with UTC time while keeping the information about the timezone.
I would like to fetch these objects with an OData query, but it fails when I try to use $orderby on properties of this type. I was able to get results when I queried $orderBy=Timestamp/Value/UniversalTime, but I would like to use just $orderBy=Timestamp.
Is there any way to order a collection by this type?
public struct DateTimeWithZone : IComparable<DateTime>, IComparable<DateTimeWithZone>, IFormattable
{
    private readonly DateTime _utcDateTime;
    private readonly TimeZoneInfo _timeZone;

    public DateTimeWithZone(DateTime dateTime, TimeZoneInfo timeZone)
    {
        if (timeZone == null)
        {
            throw new NoNullAllowedException(nameof(timeZone));
        }
        _utcDateTime = DateTime.SpecifyKind(dateTime, DateTimeKind.Utc);
        _timeZone = timeZone;
    }
    ...
}
With model defined like this:
public class ClientViewModel
{
    public string Name { get; set; }
    public DateTimeWithZone? Timestamp { get; set; }
}
And this is how it is used:
public IHttpActionResult GetAll(ODataQueryOptions<ClientViewModel> options)
{
    var fromService = _clientsClient.GetAllClients().MapTo<ClientViewModel>(MappingStrategy).AsQueryable();
    var totalCount = fromService.Count();
    var results = options.ApplyTo(fromService); // <-- Fails here
    return Ok(new PageResult<ClientViewModel>(
        results as IEnumerable<ClientViewModel>,
        Request.ODataProperties().NextLink,
        totalCount));
}
It fails with "The $orderby expression must evaluate to a single value of primitive type."
We had a similar issue with complex type ordering; maybe this can be of assistance in your scenario as well. In our case (which is not 100% identical) we used a two-phase approach:
1) rewriting the ODataQueryOptions
2) separating the external model (OData) from the internal model (Entity Framework in our case)
Rewriting ODataQueryOptions
You mention that the format $orderBy=Timestamp/Value/UniversalTime is accepted and processed properly by OData. So you can rewrite the value, basically by extracting the $orderby value and reinserting it in your working format.
I described two ways to do this in my post Modifying ODataQueryOptions on the fly (full code included), both of which take the existing options and recreate new options by constructing a new Uri. In your case you would extract Timestamp from $orderBy=Timestamp and reinsert it as $orderBy=Timestamp/Value/UniversalTime, as in the sketch below.
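A minimal sketch of that rewrite, assuming Web API 2 with System.Web.OData (the naive string Replace is for illustration only and would need guarding against the long form already being present):
private static ODataQueryOptions<ClientViewModel> RewriteOrderBy(ODataQueryOptions<ClientViewModel> options)
{
    // Swap the convenient form for the longer form that OData already accepts.
    var rewrittenUri = options.Request.RequestUri.ToString()
        .Replace("$orderBy=Timestamp", "$orderBy=Timestamp/Value/UniversalTime");

    // Recreate the query options from a new request carrying the rewritten Uri.
    var request = new HttpRequestMessage(HttpMethod.Get, new Uri(rewrittenUri));
    return new ODataQueryOptions<ClientViewModel>(options.Context, request);
}
In GetAll you would then call options = RewriteOrderBy(options) before options.ApplyTo(fromService).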
Separating External and Internal Model
In addition, we used two models: one for the public-facing API and one for the internal / persistence layer. On the internal side we used different properties, which we grouped into a navigation property that only exists on the public side. With this approach the user is able to specify an option via $expand=VirtualNavigationProperty/TimeZoneInfo and $orderby=.... Internally you do not have to use the complex data type, but can keep using DateTimeOffset, which already holds that information; a rough illustration of such a mapping follows the post links below. I described this separation and the mapping of virtual navigation properties in the following posts:
Separating your ODATA Models from the Persistence Layer with AutoMapper
More Fun with your ODATA Models and AutoMapper
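As a rough illustration of such a mapping with AutoMapper (ClientEntity and its DateTimeOffset Timestamp property are assumed names, not from your code):
var config = new MapperConfiguration(cfg =>
    cfg.CreateMap<ClientEntity, ClientViewModel>()
        .ForMember(vm => vm.Timestamp, opt => opt.MapFrom(src =>
            // DateTimeOffset already carries the offset; project it into the
            // public DateTimeWithZone type only at the API boundary.
            new DateTimeWithZone(src.Timestamp.UtcDateTime, TimeZoneInfo.Utc))));
var mapper = config.CreateMapper();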
According to your question it should be sufficient to rewrite the query options in the controller, since you mention that the (slightly longer) $orderby format already works as expected and you only wanted a more convenient query syntax.
Regards, Ronald

how to create column dynamically in grails 2.4.4 using GORM

I want to create a table in my database based on a file upload. That is, only once a CSV file is uploaded should a table be created in my database, and for each CSV file a new table should be created dynamically.
Right now I am trying to upload the file from a controller. From the controller I should pass the field values to a domain class, with the table name the same as the CSV file name. How can I solve this?
A relational database really isn't a good fit for a dynamic persistence model; something like MongoDB would be better. The best you can do with a GORM domain model is something like
class CsvFile {
    String fileName
    static hasMany = [rows: Row]
}

class Row {
    static belongsTo = [csvFile: CsvFile]
    Map<String, Object> fields
}
I realise this doesn't exactly match your requirement insofar as each CSV file would be stored in a separate table, but that just isn't really practical with a GORM domain model stored in a relational database.
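A rough upload action sketch against those domain classes (the csvFile request parameter name and the naive comma splitting are assumptions):
class CsvUploadController {

    def upload() {
        def uploaded = request.getFile('csvFile')  // multipart file from the upload form
        def csvFile = new CsvFile(fileName: uploaded.originalFilename)

        def lines = new String(uploaded.bytes, 'UTF-8').readLines()
        def headers = lines.head().tokenize(',')

        // One Row per CSV line, with the header names as map keys.
        lines.tail().each { line ->
            csvFile.addToRows(new Row(fields: [headers, line.tokenize(',')].transpose().collectEntries()))
        }
        csvFile.save(flush: true)
        redirect(action: 'index')
    }
}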

Is it possible to map a spring data entity field to another field in elasticsearch?

I have this model:
public class Foo {
    @Field(type = FieldType.String, store = true)
    String color;
}
Right now it maps to the field "color" in the Elasticsearch document. Can I map it to another field, "shirtColor", perhaps through an annotation?
spring-data-elasticsearch uses the Jackson ObjectMapper to serialize the POJO into JSON. You can use the @JsonProperty annotation if you want to change the name of the field that is stored in Elasticsearch.
public class Foo {
    @Field(type = FieldType.String, store = true)
    @JsonProperty("shirtColor")
    String color;
}
However, you will lose the benefit of the derived findBy* methods when querying the data back from Elasticsearch, and you will have to write your own custom queries to fetch the data.
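For example, a repository method that targets the serialized name directly might look like this (a sketch; the String id type for Foo is an assumption):
import java.util.List;
import org.springframework.data.elasticsearch.annotations.Query;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

public interface FooRepository extends ElasticsearchRepository<Foo, String> {

    // findByColor would be derived from the Java property name "color", which
    // no longer matches the document field, so query "shirtColor" directly.
    @Query("{\"match\": {\"shirtColor\": \"?0\"}}")
    List<Foo> findByShirtColor(String color);
}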

Dapper.NET mapping with Data Annotations

So I have a class with a property like this:
public class Foo
{
    [Column("GBBRSH")]
    public static string Gibberish { get; set; }
    ....
}
For saving data, I have it configured so that the update/insert statements use a custom function:
public static string GetTableColumnName(PropertyInfo property)
{
    var type = typeof(ColumnAttribute);
    var prop = property.GetCustomAttributes(type, false);
    if (prop.Count() > 0)
        return ((ColumnAttribute)prop.First()).Name;
    return property.Name;
}
This works fine for saving, but I noticed that when I go to retrieve the data, Dapper isn't actually pulling the data back via this function for that particular column. The other data present was pulled, but the column in question was the only field with data that didn't come back.
1) Is there a way to perhaps use the GetTableColumnName function for the retrieval part of Dapper?
2) Is there a way to force Dapper.NET to throw an exception if a scenario like this happens? I really don't want to have a false sense of security that everything is working as expected when it actually isn't (I get that I'm using mapping that Dapper.NET doesn't use by default, but I do want to set it up in that manner).
edit:
I'm looking in the SqlMapper source of Dapper and found:
private static IEnumerable<T> QueryInternal<T>(params) // my knowledge of generics is limited, but how does this work without a where T : object?
{
    ...
    while (reader.Read())
    {
        yield return (T)func(reader);
    }
    ...
}
So I learned about two things after finding this: read up on Func and read up on yield (I had never used either before). My guess is that I need to pass reader.Read() to another function (one that checks against the column headers and populates the objects appropriately) and yield return that?
You could change your select statement to work with aliases, e.g. SELECT GBBRSH AS Gibberish, and thereby provide the mapping between the attribute name and the POCO property name yourself.
That way Dapper fills the matching properties, since it only requires the returned column names to exactly match your POCO's property names.
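For the class above, that would look something like this (a sketch; the table and connection names are assumptions, and note that Dapper only populates instance properties, so Gibberish must not be static for this to bind):
using (var connection = new SqlConnection(connectionString))
{
    // The alias maps the GBBRSH column onto the Gibberish property.
    var foos = connection.Query<Foo>("SELECT GBBRSH AS Gibberish FROM FooTable").ToList();
}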
