Read storage data of a "mapping (address => bool)" from another contract

I'm trying to read the storage data of another contract, and get all of the addresses in testData:
contract XXX {
    mapping (address => bool) public testData;
    ...
}
According to the Mappings and Dynamic Arrays documentation, each value is stored at a storage location derived from keccak256 of the key (concatenated with the mapping's slot).
Since this mapping is keyed by address, I'm unable to predict the keys, so how can I retrieve all of them?

A Solidity mapping is similar to a hash map. Each key simply resolves to a storage slot determined by the key (and the mapping's own slot position), so values can be looked up directly. But there is no list of used keys unless you maintain that list yourself, for example by pushing each new key into an array alongside the mapping, as sketched below.
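A minimal sketch of that pattern (contract and function names are made up for illustration; this is not the original XXX contract):

pragma solidity ^0.8.0;

contract TestDataWithKeys {
    mapping(address => bool) public testData;
    mapping(address => bool) private seen;   // whether a key is already in the array
    address[] public keys;                   // every key that has ever been set

    function set(address key, bool value) external {
        if (!seen[key]) {
            seen[key] = true;
            keys.push(key);                  // remember the key the first time it appears
        }
        testData[key] = value;
    }

    function getKeys() external view returns (address[] memory) {
        return keys;
    }
}

An off-chain reader can then call getKeys() and look up testData for each returned address.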

Related

How to derive record types in F#?

I'm inserting data into Azure Cosmos DB via FSharp.CosmosDb. Here is the record type that I write to the DB:
[<CLIMutable>]
type DbType =
    { id: Guid
      Question: string
      Answer: int }
The persistence layer works fine but I face an inelegant redundancy. The record I'm inserting originates from the Data Transfer Object (DTO) with the following shape:
type DataType =
    { QuestionId: Guid
      Question: string
      Answer: int }
Cosmos DB accepts only records with a lowercase id. Is there any way to derive DbType from DataType, or do I have to define DbType from scratch?
Is there anything à la the copy-and-update record expression record2 = { record1 with id = record1.QuestionId }, but at the type level?
There's no type-level way of deriving one record type from another the way you describe; however, you can get reasonably close with anonymous records, added in F# 4.6.
type DataType =
    { QuestionId: Guid
      Question: string
      Answer: int }

let example =
    { QuestionId = Guid.NewGuid()
      Question = "The meaning of life etc."
      Answer = 42 }

let extended =
    {| example with id = example.QuestionId |}
This gives you a value of an anonymous record type with an added field, and it may be well suited to your scenario; however, it's unwieldy to write code against such a type once it leaves the scope of the function that creates it.
If all you care about is how this single field is named, serialization libraries usually have ways of providing aliases for field names (like Newtonsoft.Json's JsonProperty attribute). Note that this might be obscured from you by the CosmosDb library you're using, which I'm not familiar with.
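For instance, a minimal sketch of the aliasing approach, assuming the library ultimately serializes through Json.NET and honors its attributes:

open System
open Newtonsoft.Json

type DataType =
    { [<JsonProperty("id")>]
      QuestionId: Guid
      Question: string
      Answer: int }

// QuestionId is written out under the lowercase "id" name that Cosmos DB expects.
let json =
    JsonConvert.SerializeObject(
        { QuestionId = Guid.NewGuid(); Question = "The meaning of life etc."; Answer = 42 })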
Another more involved approach is to use generic envelope types so that the records you persist have a uniform data store specific header across your application:
type Envelope<'record> =
    {
        id: string
        // <other fields as needed>
        payload: 'record
    }
In that case the envelope contains the fields your datastore expects in order to fulfill the contract (plus any application-specific metadata you might find useful: timestamps, event types, versions, and so on), and it spares you the effort of defining data-store-specific versions of each type you want to persist.
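A usage sketch (deriving the document id from QuestionId is just an assumption about how you would key the document):

let toEnvelope (dto: DataType) : Envelope<DataType> =
    { id = dto.QuestionId.ToString()
      payload = dto }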
Note that it is still a good idea in general to decouple the internal domain types from the representation you use for storage for maintainability reasons.

Saving record in RavenDb with F# adding extra Id column

When I save a new F# Record, I'm getting an extra column called Id# in the RavenDb document, and it shows up when I load or view the object in code; it's even being converted to JSON through my F# API.
Here is my F# record type:
type Campaign = { mutable Id : string; name : string; description : string }
I'm not doing anything very exciting to save it:
let save c : Campaign =
    use session = store.OpenSession()
    session.Store(c)
    session.SaveChanges()
    c
Saving a new instance of a record creates a document with the Id of campaigns/289. Here is the full value of the document in RavenDb:
{
    "Id#": "campaigns/289",
    "name": "Recreating Id bug",
    "description": "Hello StackOverflow!"
}
Now, when I used this same database (and document) in C#, I didn't get the extra Id# value. This is what a record looks like when I saved it in C#:
{
    "Description": "Hello StackOverflow!",
    "Name": "Look this worked fine"
}
(Aside - "name" vs "Name" means I have 2 name columns in my document. I understand that problem, at least).
So my question is: How do I get rid of the extra Id# property being created when I save an F# record in RavenDb?
As noted by Fyodor, this is caused by how F# generates a backing field when you create a record type. The default contract resolver for RavenDB serializes that backing field instead of the public property.
You can change the default contract resolver in RavenDB. It will look something like this if you want to use Newtonsoft Json.NET's CamelCasePropertyNamesContractResolver:
DocumentStore.Conventions.JsonContractResolver <- new CamelCasePropertyNamesContractResolver()
There is a longer explanation of why this works elsewhere (see the section titled "The explanation"); briefly, the Newtonsoft library uses the public properties of the type instead of the private backing fields.
I also recommend that, instead of marking the Id field mutable, you put the [<CLIMutable>] attribute on the type itself, like:
[<CLIMutable>]
type Campaign = { Id : string; name : string; description : string }
This lets libraries mutate the values through the compiler-generated setters while still preventing mutation in your own F# code.
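A small sketch of what that means in practice, using the Campaign type above (the reflection call stands in for whatever a serializer does internally):

let c = { Id = "temporary"; name = "Recreating Id bug"; description = "Hello StackOverflow!" }

// c.Id <- "campaigns/289"   // compile error: record fields remain immutable in F# source

// The setter generated by [<CLIMutable>] is still available to libraries, e.g. via reflection:
typeof<Campaign>.GetProperty("Id").SetValue(c, "campaigns/289")
printfn "%s" c.Id            // prints "campaigns/289"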
This is a combination of... well, you can't quite call them "bugs", so let's say "non-straightforward features" of both the F# compiler and RavenDB.
The F# compiler generates a public backing field for the Id record field. This field is named Id# (a standard pattern for all F# backing fields), and it's public, because the record field is mutable. For immutable record fields, backing fields will be internal. Why it needs to generate a public backing field for mutable record fields, I don't know.
Now, RavenDb, when generating the schema, apparently looks at both properties and fields. This is a bit non-standard. The usual practice is to consider only properties. But alas, Raven picks up the public field named Id#, and makes it part of the schema.
You can combat this problem in two ways:
First, you could make the Id field immutable. I'm not sure whether that would work for you or RavenDb. Perhaps not, since the Id is probably generated on insert.
Second, you could declare your Campaign not as an F# record, but as a true class:
type Campaign(id: string, name: string, description: string) =
    member val Id = id with get, set
    member val name = name
    member val description = description
This way, all backing fields stay internal and no confusion will arise. The drawback is that you have to write every field twice: first as constructor argument, then as class member.

F# Turning XmlProvider data into Records

I am pulling in some XML data using XmlProvider, and I will be accessing it from C#. As you can't use provided types directly from C#, I need to create records out of them. I can do this by hand, but I believe it should be possible to automate using reflection. If I create record types with the same names and types as the fields in the type provider, I should be able to use something like FSharpValue.MakeRecord(typeof<MyType>, values), where values is an array of objects.
What I don't know is how to get the array of values out of the type provider, and how to handle nested records, for instance:
type Address =
    {
        Address1 : string
        City : string
        State : string
    }

type Client =
    {
        Id : int
        FullName : string
        Address : Address
    }
In this case Client contains one Address. Will I need to walk the tree and use MakeRecord on the leaves and work my way up?
If you're willing to hand code the types, why do you need the type provider in the first place?
If you're doing some additional logic on the F# side, you'll have no choice but to create the records manually anyway. And if you're not doing anything, you can just use the out-of-the-box .NET serializer (or another library) to create them from the XML.
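That said, if you do go the reflection route described in the question, a minimal sketch might look like the following; it assumes the source object (e.g. an XmlProvider value) exposes properties with the same names as the target record's fields:

open Microsoft.FSharp.Reflection

// Builds a record of type targetType by copying identically named properties
// from source, recursing into fields that are themselves records (e.g. Address).
let rec makeRecordFrom (targetType: System.Type) (source: obj) : obj =
    let values =
        FSharpType.GetRecordFields(targetType)
        |> Array.map (fun field ->
            let raw = source.GetType().GetProperty(field.Name).GetValue(source)
            if FSharpType.IsRecord field.PropertyType then
                makeRecordFrom field.PropertyType raw   // walk the tree for nested records
            else
                raw)
    FSharpValue.MakeRecord(targetType, values)

// Hypothetical usage, where providedClient is a value from the type provider:
// let client = makeRecordFrom typeof<Client> providedClient :?> Client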

Breeze manage NODB EntityTypes with DB EntityTypes

I'm using the CCJS code from Papa's course to investigate Breeze.js and SPAs. Using this code, I'm trying to manage additional information that comes from the server but is not an entity contained in the metadata that comes from Entity Framework.
So I created a no-DB class called Esto and a server method like Lookups:
[HttpGet]
public object Informacion()
{
    var a = new Esto(....);
    var b = new Esto(.....);
    var c = new Esto(......);
    return new { a, b, c };
}
Then in model.js, inside configureMetadataStore, I call:
metadataStore.addEntityType({
    shortName: "Esto",
    namespace: "CodeCamper",
    dataProperties: {
        id: { dataType: breeze.DataType.Int32, isPartOfKey: true },
        name: { dataType: breeze.DataType.String }
    }
});
I also define esto: 'Esto' as an entity in the model's entityNames array.
Now in context.js I load this by creating a method like getLookups, but called getInformacion:
function getInformacion() {
    return EntityQuery.from('Informacion')
        .using(manager).execute();
}
and then inside primeData, in the success callback, I call this:
datacontext.informacion = {
    esto: getLocal('Esto', nombre)
};
where getLocal is:
function getLocal(resource, ordering) {
    var query = EntityQuery.from(resource).orderBy(ordering);
    return manager.executeQueryLocally(query);
}
The query inside getLocal fails with an error stating: Can not find EntityType for either entityTypeName: 'undefined' or resourceName: 'Esto'.
What am I doing wrong?
Thanks
You were almost there! :-) Had you specified the target EntityType in the query I think it would have worked.
Try this:
var query = EntityQuery.from(resource).orderBy(ordering).toType('Esto');
The toType() method tells Breeze that the top-level objects returned by this query will be of type Esto.
Why?
Let's think about how Breeze interprets a query specification.
Notice that you began your query, as we usually do, by naming the resource which will supply the data. This resource is typically a path segment to a remote service endpoint, perhaps the name of a Web API controller method ... a method named "Foos".
It's critical to understand that the query resource name is rarely the same as the EntityType name! They may be similar - "Foos" (plural) is similar to the type name "Foo" (singular). But the resource name could be something else. It could be "GetFoos" or "GreatFoos" or anything at all. What matters is that the service method returns "Foo" entities.
Breeze needs a way to correlate the resource name with the EntityType name. Breeze doesn't know the correlation on its own. The toType() method is one way to tell Breeze about it.
Why do remote queries work without toType()?
You generally don't add toType() to your queries. Why not?
Most of the time [1], Breeze doesn't need to know the EntityType until after the data arrive from the server. When the JSON query results includes the type name (as they do when they come from a Breeze Web API controller for example), Breeze can map the arriving JSON data into entities without our help ... assuming that these type names are in metadata.
Local cache queries are different
When you query the cache ... say with executeQueryLocally ... Breeze must know which cached entity-set to search before it can query locally.
It "knows" if you specify the type with toType(). But if you omit toType(), Breeze has to make do with the query's resource name.
Breeze doesn't guess. Instead, it looks in an EntityType/ResourceName map for the entity-set that matches the query resource name.
The resource name refers to a service endpoint, not a cached entity-set. There is no entity-set named "Informacion", for example. So Breeze uses an EntityType/ResourceName map to find the entity type associated with the query resource name.
EntityType/ResourceName
The EntityType/ResourceName map is one of the items in the Breeze MetadataStore. You've probably never heard of it. That's good; you shouldn't have to think about it ... unless you do something unusual like define your own types.
The map of a new MetadataStore starts empty. Breeze populates it from server metadata if those metadata contain EntityType/Resource mappings.
For example, the Breeze EFContextProvider generates metadata with mappings derived from DbSet names. When you define a Foo class and expose it from a DbContext as a DbSet named "Foos", the EFContextProvider metadata generator adds a mapping from the "Foos" resource name to the Foo entity type.
Controller developers tend to use DbSet names for method names. The conventional Breeze Web API controller "Foo" query method looks like this:
[HttpGet]
public IQueryable<Foo> Foos() {...}
Now if you take a query such as this:
var query = EntityQuery.from('Foos').where(...);
and apply it to the cache
manager.executeQueryLocally(query);
it just works.
Why? Because
"Foos" is the name of a DbSet on the server
The EFContextProvider generated metadata mapping ["Foos" to Model.Foo]
The Web API Controller offers a Foos action method.
The BreezeJS query specifies "Foos"
The executeLocally method finds the ["Foos"-to-Model.Foo] mapping in metadata and applies the query to the entity-set for Foo.
The end-to-end conventions work silently in your favor.
... until you mention a resource name that is not in the EntityType/ResourceName map!
Register the resource name
No problem!
You can add your own resource-to-entity-type mappings as follows:
var metadataStore = manager.metadataStore;
var typeName = 'some-type-name';
var entityType = metadataStore.getEntityType(typeName);
metadataStore.setEntityTypeForResourceName(resource, entityType);
Breeze is also happy with just the name of the type:
metadataStore.setEntityTypeForResourceName(resource, typeName);
In your case, somewhere near the top of your DataContext, you could write:
var metadataStore = manager.metadataStore;
// map two resource names to Esto
metadataStore.setEntityTypeForResourceName('Esto', 'Esto');
metadataStore.setEntityTypeForResourceName('Informacion', 'Esto');
Don't over-use toType()
The toType() method is a good short-cut solution when you need to map the top-level objects in the query result to an EntityType. You don't have to mess around with registering resource names.
However, you must remember to add toType() to every query that needs it. Configure Breeze metadata with the resource-to-entity-type mapping and you'll get the desired behavior every time.
Notes
[1] "Most of the time, Breeze doesn't need to know the EntityType until after the data arrive from the server." One important exception - out of scope for this discussion - is when the query filter involves a Date/Time.
I think that the problem here is that you are assuming that entity type names and resource names are the same thing. A resource name is what is used to execute a query
var q = EntityQuery.from(resourceName);
In your case the "resourceName" is "Informacion" and the entityType is actually "Esto". Breeze is able to make this connection on a remote query because it can examine the results returned from the server as a result of querying "Informacion" and seeing that they are actually "Esto" instances. This is not possible for a local query because Breeze doesn't know what local collection to start from.
In this case you need to give Breeze a little more information via the MetadataStore.setEntityTypeForResourceName method. Something like this:
var estoType = manager.metadataStore.getEntityType("Esto");
manager.metadataStore.setEntityTypeForResourceName("Informacion", estoType);
Note that this is not actually necessary if the resource was defined via Entity Framework metadata, because Breeze automatically associates all EF EntitySet names with resource names, but this information isn't available for DTOs.
Note also that a single entity type can have as many resourceNames as you like. Just make sure to register the resourceNames before you attempt a local query.

Map of other types than Strings in Grails

I created a simple domain class with a map in it:
class Foo {
    Map bar
}
The bar mapping will be created as something like:
create table foo_bar (bar bigint, bar_idx varchar(255),
bar_elt varchar(255) not null);
...as stated in http://www.grails.org/GORM+-+Collection+Types:
The static hasMany property defines the type of the elements within the Map. The keys for the map MUST be strings.
Now my question is: is it possible to create a map of values other than Strings? I can achieve that using pure Hibernate (element mapping); any ideas how to port this to Grails?
I think you meant to ask whether it's possible to create a map with KEYS other than Strings.
It is not possible: all keys must be Strings, while values can be whatever type you want.
A way to achieve what you want is to use some unique String identifier of the object as the key of your map.
Say you want a Map persisted in your database, and you have two instances, objectA and objectB, that you want to store in a map named relationship:
relationship[objectA.toString()] = objectB
That should work. Replace toString() with hashCode(), getId(), or anything else that gives you a unique String identifying that object and only that object, and you've got it.
