How to convert Map<String, Object> to Map<Key, Value> in dart?

I have this Map of String, Object and I want it to be flattened so that the result is a Map of Key, Value:
void main(){
var li = {"qotd_date":"2019-05-18T00:00:00.000+00:00","quote":{"id":61869,"dialogue":false,"private":false,"tags":[],"url":"https://favqs.com/quotes/wael-ghonim/61869-the-power-of--","favorites_count":1,"upvotes_count":0,"downvotes_count":0,"author":" Wael Ghonim ","author_permalink":"wael-ghonim","body":"The power of the people is greater than the people in power."}};
print(li);
}
Current Output :
{qotd_date: 2019-05-18T00:00:00.000+00:00, quote: {id: 61869, dialogue: false, private: false, tags: [], url: https://favqs.com/quotes/wael-ghonim/61869-the-power-of--, favorites_count: 1, upvotes_count: 0, downvotes_count: 0, author: Wael Ghonim , author_permalink: wael-ghonim, body: The power of the people is greater than the people in power.}}
Expected Output I want is
{"qotd_date":"2019-05-18T00:00:00.000+00:00","id":61869,"dialogue":false,"private":false,"tags":[],"url":"https://favqs.com/quotes/wael-ghonim/61869-the-power-of--","favorites_count":1,"upvotes_count":0,"downvotes_count":0,"author":" Wael Ghonim ","author_permalink":"wael-ghonim","body":"The power of the people is greater than the people in power."};
It is just like flattening the Map. How can I achieve this?

Simply encode it as JSON:
import 'dart:convert';
void main() {
  Map<String, dynamic> li = {
    "qotd_date": "2019-05-18T00:00:00.000+00:00",
    "quote": {
      "id": 61869,
      "dialogue": false,
      "private": false,
      "tags": [],
      "url": "https://favqs.com/quotes/wael-ghonim/61869-the-power-of--",
      "favorites_count": 1,
      "upvotes_count": 0,
      "downvotes_count": 0,
      "author": " Wael Ghonim ",
      "author_permalink": "wael-ghonim",
      "body": "The power of the people is greater than the people in power."
    }
  };

  Map<String, dynamic> flattened = {};
  // Move every nested Map's entries up into `flattened` and drop the nested
  // entry from the original map.
  li.removeWhere((key, value) {
    if (value is Map) {
      // Copy into a String-keyed map so this also compiles under null safety.
      flattened.addAll(Map<String, dynamic>.from(value));
    }
    return value is Map;
  });
  // Then keep the remaining top-level entries as well.
  flattened.addAll(li);
  print(jsonEncode(flattened));
}
Output:
{"id":61869,"dialogue":false,"private":false,"tags":[],"url":"https://favqs.com/quotes/wael-ghonim/61869-the-power-of--","favorites_count":1,"upvotes_count":0,"downvotes_count":0,"author":" Wael Ghonim ","author_permalink":"wael-ghonim","body":"The power of the people is greater than the people in power.","qotd_date":"2019-05-18T00:00:00.000+00:00"}

Thanks to @MehmetEsen's solution; all you really need to add is jsonEncode. The job of jsonEncode is to convert an object to a JSON string:
import 'dart:convert';
void main(){
var li = {"qotd_date":"2019-05-18T00:00:00.000+00:00","quote":{"id":61869,"dialogue":false,"private":false,"tags":[],"url":"https://favqs.com/quotes/wael-ghonim/61869-the-power-of--","favorites_count":1,"upvotes_count":0,"downvotes_count":0,"author":" Wael Ghonim ","author_permalink":"wael-ghonim","body":"The power of the people is greater than the people in power."}};
print(jsonEncode(li));
}

Related

Avro schema evolution testing and questions

With the following Avro schemas defined and the test code below, I have a couple of questions about Avro schema evolution and how the first version of the Avro data can be stored and later retrieved using the second version of the schema. In my example, Person.avsc represents the first version, and PersonWithMiddleName.avsc represents the second version, where we have added a middleName attribute.
Is there a way to store the Avro schema and the binary-encoded data together as a byte array in Java? We want to store our Avro objects in DynamoDB, and we'd like to store the Avro data as a blob with the schema stored alongside it (just like it is when written to a file). For reference, look at my Test Output below (the binary contents didn't copy, so the line just reads The Person is now serialized to a byte array: JoeCool) and compare what gets stored when Person is serialized to a byte array vs. when it is written out to the person.avro file during the test. As you can see, it appears that the schema is only written out with the file and not with the byte array.
Is the AvroTypeException I encounter during my test truly expected as I have indicated with my comments in the catch block of the test? In this case, I have serialized the Person object as JSON and tried to deserialize it as PersonWithMiddleName.
Java Test Code
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import org.apache.avro.AvroTypeException;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.Encoder;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.io.JsonDecoder;
import org.apache.avro.specific.SpecificDatumReader;
import org.apache.avro.specific.SpecificDatumWriter;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class SchemaEvolutionTest {
Logger log = LoggerFactory.getLogger(this.getClass());
@Test
public void createAndReadPerson() {
// Create the Person using the Person schema
var person = new Person();
person.setFirstName("Joe");
person.setLastName("Cool");
log.info("Person has been created: {}", person);
SpecificDatumWriter<Person> personSpecificDatumWriter =
new SpecificDatumWriter<Person>(Person.class);
DataFileWriter<Person> dataFileWriter = new DataFileWriter<Person>(personSpecificDatumWriter);
try {
dataFileWriter.create(person.getSchema(), new File("person.avro"));
dataFileWriter.append(person);
dataFileWriter.close();
} catch (IOException e) {
Assertions.fail();
}
log.info("Person has been written to an Avro file");
// ******************************************************************************************************
// Next, read as Person from the Avro file using the Person schema
DatumReader<Person> personDatumReader =
new SpecificDatumReader<Person>(Person.getClassSchema());
var personAvroFile = new File("person.avro");
DataFileReader<Person> personDataFileReader = null;
try {
personDataFileReader = new DataFileReader<Person>(personAvroFile, personDatumReader);
} catch (IOException e1) {
Assertions.fail();
}
Person personReadFromFile = null;
while (personDataFileReader.hasNext()) {
// Reuse object by passing it to next(). This saves us from
// allocating and garbage collecting many objects for files with
// many items.
try {
personReadFromFile = personDataFileReader.next(person);
} catch (IOException e) {
Assertions.fail();
}
}
log.info("Person read from the file: {}", personReadFromFile.toString());
// ******************************************************************************************************
// Read the Person from the Person file as PersonWithMiddleName using only the
// PersonWithMiddleName schema
DatumReader<PersonWithMiddleName> personWithMiddleNameDatumReader =
new SpecificDatumReader<PersonWithMiddleName>(PersonWithMiddleName.getClassSchema());
DataFileReader<PersonWithMiddleName> personWithMiddleNameDataFileReader = null;
try {
personWithMiddleNameDataFileReader =
new DataFileReader<PersonWithMiddleName>(personAvroFile, personWithMiddleNameDatumReader);
} catch (IOException e1) {
Assertions.fail();
}
PersonWithMiddleName personWithMiddleName = null;
while (personWithMiddleNameDataFileReader.hasNext()) {
// Reuse object by passing it to next(). This saves us from
// allocating and garbage collecting many objects for files with
// many items.
try {
personWithMiddleName = personWithMiddleNameDataFileReader.next(personWithMiddleName);
} catch (IOException e) {
Assertions.fail();
}
}
log.info(
"Now a PersonWithMiddleName has been read from the file that was written as a Person: {}",
personWithMiddleName.toString());
// ******************************************************************************************************
// Serialize the Person to a byte array
byte[] personByteArray = new byte[0];
ByteArrayOutputStream personByteArrayOutputStream = new ByteArrayOutputStream();
Encoder encoder = null;
try {
encoder = EncoderFactory.get().binaryEncoder(personByteArrayOutputStream, null);
personSpecificDatumWriter.write(person, encoder);
encoder.flush();
personByteArray = personByteArrayOutputStream.toByteArray();
} catch (IOException e) {
log.error("Serialization error:" + e.getMessage());
}
log.info("The Person is now serialized to a byte array: {}", new String(personByteArray));
// ******************************************************************************************************
// Deserialize the Person byte array into a Person object
BinaryDecoder binaryDecoder = null;
Person decodedPerson = null;
try {
binaryDecoder = DecoderFactory.get().binaryDecoder(personByteArray, null);
decodedPerson = personDatumReader.read(null, binaryDecoder);
} catch (IOException e) {
log.error("Deserialization error:" + e.getMessage());
}
log.info("Decoded Person from byte array {}", decodedPerson.toString());
// ******************************************************************************************************
// Deserialize the Person byte array into a PersonWithMiddleName object
PersonWithMiddleName decodedPersonWithMiddleName = null;
try {
binaryDecoder = DecoderFactory.get().binaryDecoder(personByteArray, null);
decodedPersonWithMiddleName = personWithMiddleNameDatumReader.read(null, binaryDecoder);
} catch (IOException e) {
log.error("Deserialization error:" + e.getMessage());
}
log.info(
"Decoded PersonWithMiddleName from byte array {}", decodedPersonWithMiddleName.toString());
// ******************************************************************************************************
// Serialize the Person to JSON
byte[] jsonByteArray = new byte[0];
personByteArrayOutputStream = new ByteArrayOutputStream();
Encoder jsonEncoder = null;
try {
jsonEncoder =
EncoderFactory.get().jsonEncoder(Person.getClassSchema(), personByteArrayOutputStream);
personSpecificDatumWriter.write(person, jsonEncoder);
jsonEncoder.flush();
jsonByteArray = personByteArrayOutputStream.toByteArray();
} catch (IOException e) {
log.error("Serialization error:" + e.getMessage());
}
log.info("The Person is now serialized to JSON: {}", new String(jsonByteArray));
// ******************************************************************************************************
// Deserialize the Person JSON into a Person object
JsonDecoder jsonDecoder = null;
try {
jsonDecoder =
DecoderFactory.get().jsonDecoder(Person.getClassSchema(), new String(jsonByteArray));
decodedPerson = personDatumReader.read(null, jsonDecoder);
} catch (IOException e) {
log.error("Deserialization error:" + e.getMessage());
}
log.info("Decoded Person from JSON: {}", decodedPerson.toString());
// ******************************************************************************************************
// Deserialize the Person JSON into a PersonWithMiddleName object
try {
jsonDecoder =
DecoderFactory.get()
.jsonDecoder(PersonWithMiddleName.getClassSchema(), new String(jsonByteArray));
decodedPersonWithMiddleName = personWithMiddleNameDatumReader.read(null, jsonDecoder);
} catch (AvroTypeException ae) {
// Do nothing. We expect this since JSON didn't serialize anything out.
log.error(
"An AvroTypeException occurred trying to deserialize Person JSON back into a PersonWithMiddleName. Here's the exception: {}",ae.getMessage());
} catch (Exception e) {
log.error("Deserialization error:" + e.getMessage());
}
}
}
Person.avsc
{
"type": "record",
"namespace": "org.acme.avro_testing",
"name": "Person",
"fields": [
{
"name": "firstName",
"type": ["null", "string"],
"default": null
},
{
"name": "lastName",
"type": ["null", "string"],
"default": null
}
]
}
PersonWithMiddleName.avsc
{
"type": "record",
"namespace": "org.acme.avro_testing",
"name": "PersonWithMiddleName",
"fields": [
{
"name": "firstName",
"type": ["null", "string"],
"default": null
},
{
"name": "middleName",
"type": ["null", "string"],
"default": null
},
{
"name": "lastName",
"type": ["null", "string"],
"default": null
}
]
}
Test Output
Person has been created: {"firstName": "Joe", "lastName": "Cool"}
Person has been written to an Avro file
Person read from the file: {"firstName": "Joe", "lastName": "Cool"}
Now a PersonWithMiddleName has been read from the file that was written as a Person: {"firstName": "Joe", "middleName": null, "lastName": "Cool"}
The Person is now serialized to a byte array: JoeCool
Decoded Person from byte array {"firstName": "Joe", "lastName": "Cool"}
Decoded PersonWithMiddleName from byte array {"firstName": "Joe", "middleName": null, "lastName": "Cool"}
The Person is now serialized to JSON: {"firstName":{"string":"Joe"},"lastName":{"string":"Cool"}}
Decoded Person from JSON: {"firstName": "Joe", "lastName": "Cool"}
An AvroTypeException occurred trying to deserialize Person JSON back into a PersonWithMiddleName. Here's the exception: Expected field name not found: middleName
person.avro
Objavro.schema�{"type":"record","name":"Person","namespace":"org.acme.avro_testing","fields":[{"name":"firstName","type":["null","string"],"default":null},{"name":"lastName","type":["null","string"],"default":null}]}
For question one, I'm not a Java expert, but in Python, instead of writing to an actual file, there is the concept of a file-like object that has the same interface as a file but just writes to a byte buffer. So, for example, instead of doing this:
file = open(file_name, "wb")
# use avro library to write to file
file.close()
You can do something like this:
from io import BytesIO
bytes_interface = BytesIO()
# use bytes_interface the same way you would the previous "file" object
byte_output = bytes_interface.getvalue()
So that final byte_output would be the bytes that normally would have been written to a file but now is just a byte buffer that could be stored anywhere. Does Java have some concept like this? Alternatively I'm assuming there is some way in Java to read the file contents back into a byte buffer if you absolutely have to go through the process of writing an actual temporary file.
For question two, I think you are hitting the same problem mentioned in this Jira ticket: https://issues.apache.org/jira/browse/AVRO-2890. Currently the JSON decoder expects the schema the data was written with and can't do any sort of schema evolution with a different schema than the data was written with.

How to render a soundtrack in VexFlow?

Using Tone.js and I can play this melody:
const textMeasures = ['rest/4 B4/16 A4/16 G#4/16 A4/16',
'C5/8 rest/8 D5/16 C5/16 B4/16 C5/16',
'E5/8 rest/8 F5/16 E5/16 D#5/16 E5/16',
'B5/16 A5/16 G#5/16 A5/16 B5/16 A5/16 G#5/16 A5/16',
'C6/4 A5/8 C6/8',
'B5/8 A5/8 G5/8 A5/8',
'B5/8 A5/8 G5/8 A5/8',
'B5/8 A5/8 G5/8 F#5/8',
'E5/4'];
Now, I'd like to use VexFlow to render these notes on some staves.
Keep in mind that the display should look fine on a mobile device and that there could be multiple voices in the soundtrack.
For now I created a method to have a stave per measure:
private renderSoundtrackVexflow(name: string, soundtrack: Soundtrack) {
const context = this.renderVexflowContext(name, VEXFLOW_WIDTH, VEXFLOW_HEIGHT);
context.setFont('Arial', 10, '').setBackgroundFillStyle('#eed'); // TODO Hard coded values
if (soundtrack.hasTracks()) {
let staveIndex: number = 0;
const voices: Array<vexflow.Flow.Voice> = new Array<vexflow.Flow.Voice>();
for (const track of soundtrack.tracks) {
if (track.hasMeasures()) {
for (const measure of track.measures) {
if (measure.hasNotes()) {
const stave = new this.VF.Stave(0, staveIndex * (VEXFLOW_STAVE_HEIGHT + VEXFLOW_STAVE_MARGIN), VEXFLOW_WIDTH);
staveIndex++;
stave.setContext(context);
stave.addClef(VEXFLOW_CLEF);
stave.addTimeSignature(this.renderTimeSignature(measure));
stave.draw();
const notes = new Array<vexflow.Flow.StaveNote>();
const voice = new this.VF.Voice();
voice.setStrict(false);
voice.setContext(context);
voice.setStave(stave);
for (const placedNote of measure.placedNotes) {
const note: Note = placedNote.note;
notes.push(new this.VF.StaveNote({ keys: [ this.renderAbc(note) ], duration: this.renderDuration(note) }));
}
voice.addTickables(notes);
voices.push(voice);
}
}
const formatter = new this.VF.Formatter();
formatter.joinVoices(voices).format(voices, VEXFLOW_WIDTH);
for (const voice of voices) {
voice.draw();
console.log('Min voice width: ' + formatter.getMinTotalWidth(voice));
}
}
}
}
}
The notes are partly displayed on the staves, but some notes end up in between the staves, elsewhere on the page.
The question is a bit old, but in case you still need a solution...
The problem is that all notes have their stems oriented upwards. To fix this, add auto_stem: true and clef: "treble" to the options passed to VF.StaveNote(); see VexFlow: Stem Direction for more information. Here is the corrected line that creates the notes:
notes.push(new this.VF.StaveNote({
keys: [ this.renderAbc(note) ],
duration: this.renderDuration(note),
auto_stem: true,
clef: "treble"
}));

Flutter : Capture the Future response from a http call as a normal List

I am trying to make my previously static app dynamic by making calls to the backend server.
This is what my service looks like
String _url = "http://localhost:19013/template/listAll";
Future<List<TemplateInfoModel>> fetchTemplates() async {
final response =
await http.get(_url);
print(response.statusCode);
if (response.statusCode == 200) {
// If the call to the server was successful, parse the JSON
Iterable l = json.decode(response.body);
List<TemplateInfoModel> templates = l.map((i) => TemplateInfoModel.fromJson(i)).toList();
return templates;
} else {
// If that call was not successful, throw an error.
throw Exception('Failed to load post');
}
}
And my model looks like this :
class TemplateInfoModel {
final String templateId;
final String messageTag;
final String message;
final String messageType;
TemplateInfoModel({this.templateId, this.messageTag, this.message,
this.messageType});
factory TemplateInfoModel.fromJson(Map<String, dynamic> json) {
return TemplateInfoModel ( templateId: json['templateId'], messageTag : json['messsageTag'], message : json ['message'] , messageType : json['messageType']);
}
}
I have a utils method within which I am capturing the http response data, which would then be used to create a DropDown widget (or displayed within a Text).
My earlier dummy data was a list; I am wondering how best I can convert this Future<List<TemplateInfoModel>> to a List.
class SMSTemplatingEngine {
var _SMSTemplate; //TODO this becomes a Future<List<TemplateInfoModels>>
// TemplateInfoService _templateInfoService;
SMSTemplatingEngine(){
_SMSTemplate=fetchTemplates();
}
// var _SMSTemplate = {
// 'Reminder':
// 'Reminder : We’re excited to launch a new feature on our platform that will revolutionize your Facebook marketing and triple your ROI. Visit url.com to learn more',
// 'New Message':
// 'New Message: We’re excited to launch a new feature on our platform that will revolutionize your Facebook marketing and triple your ROI. Visit url.com to learn more',
// 'Connecting Again':
// 'Connecting Again : We’re excited to launch a new feature on our platform that will revolutionize your Facebook marketing and triple your ROI. Visit url.com to learn more',
// };
List<String> getKeys(){
List<String> smsKeys = new List();
for ( var key in _SMSTemplate.keys)
smsKeys.add(key);
return smsKeys;
}
String getValuePerKey(String key){
return _SMSTemplate['${key}'];
}
}
P.S. I have looked at some posts, but I was completely bowled over since I am a Flutter newbie.
Is there an easier way for this to happen?
Thanks,
The widget which would display the content from the http call
var templateMessagesDropDown = new DropdownButton<String>(
onChanged: (String newValue) {
setState(() {
templateMsgKey = newValue;
print("Selcted : ${templateMsgKey.toString()} ");
});
},
// value: _defaultTemplateValue,
style: textStyle,
//elevation: 1,
hint: Text("Please choose a template"),
isExpanded: true,
//
items: smsEngine.getKeys().map<DropdownMenuItem<String>>((String value) {
return DropdownMenuItem<String>(
value: value,
child: Text(value),
);
}).toList(),
);
I am wondering how best I can convert this Future to List
Future<List<TemplateInfoModel>> fetchTemplates() already returns the List you're expecting once the Future completes. You may want to consider using either FutureBuilder or StreamBuilder to update the UI elements on your screen (see the sketch at the end of this answer). Or, if you're not keen on using either of those, you can just call the Future and update the List on your current screen:
List<TemplateInfoModel> _listTemplateInfoModel = [];
...
fetchTemplates().then((value){
setState((){
_listTemplateInfoModel = value;
});
});
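For reference, here is a minimal FutureBuilder sketch, assuming the fetchTemplates() and TemplateInfoModel from the question (the surrounding widget names are up to you):

import 'package:flutter/material.dart';

// Minimal sketch: fetchTemplates() and TemplateInfoModel come from the question.
Widget buildTemplateDropdown() {
  return FutureBuilder<List<TemplateInfoModel>>(
    future: fetchTemplates(),
    builder: (context, snapshot) {
      if (snapshot.hasError) {
        return const Text('Failed to load templates');
      }
      if (!snapshot.hasData) {
        return const CircularProgressIndicator();
      }
      // Once the Future completes, snapshot.data is a plain List<TemplateInfoModel>.
      final templates = snapshot.data!;
      return DropdownButton<String>(
        hint: const Text('Please choose a template'),
        items: templates
            .map((t) => DropdownMenuItem<String>(
                  value: t.messageTag,
                  child: Text(t.messageTag),
                ))
            .toList(),
        onChanged: (value) {
          // Update the selected template key here (e.g. inside setState).
        },
      );
    },
  );
}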

Neo4j/GraphQL augmented Schema

I'm trying to learn a bit about Neo4j's GraphQL integration. I'm using the GRANDstack starter, which can be found here: Grand Stack Starter. The starter doesn't use a lot of the boilerplate that you see in other GraphQL applications because, as I understand it, the Neo4j integration is supposed to bypass the need to double-declare the schema and resolvers. It uses Apollo Server and an "augmentedSchema" instead. Using the starter code, I've tried to add some mutations to the schema, in this case a deleteUser mutation. I keep getting an error saying that the Mutation is defined more than once, when I know I'm only putting it in the code in one place. This is my schema.js:
import { neo4jgraphql } from "neo4j-graphql-js";
export const typeDefs = `
type User {
id: ID!
name: String
friends(first: Int = 10, offset: Int = 0): [User] @relation(name: "FRIENDS", direction: "BOTH")
reviews(first: Int = 10, offset: Int = 0): [Review] @relation(name: "WROTE", direction: "OUT")
avgStars: Float @cypher(statement: "MATCH (this)-[:WROTE]->(r:Review) RETURN toFloat(avg(r.stars))")
numReviews: Int @cypher(statement: "MATCH (this)-[:WROTE]->(r:Review) RETURN COUNT(r)")
}
type Business {
id: ID!
name: String
address: String
city: String
state: String
reviews(first: Int = 10, offset: Int = 0): [Review] @relation(name: "REVIEWS", direction: "IN")
categories(first: Int = 10, offset: Int =0): [Category] @relation(name: "IN_CATEGORY", direction: "OUT")
}
type Review {
id: ID!
stars: Int
text: String
business: Business @relation(name: "REVIEWS", direction: "OUT")
user: User @relation(name: "WROTE", direction: "IN")
}
type Category {
name: ID!
businesses(first: Int = 10, offset: Int = 0): [Business] @relation(name: "IN_CATEGORY", direction: "IN")
}
type Query {
users(id: ID, name: String, first: Int = 10, offset: Int = 0): [User]
businesses(id: ID, name: String, first: Int = 10, offset: Int = 0): [Business]
reviews(id: ID, stars: Int, first: Int = 10, offset: Int = 0): [Review]
category(name: ID!): Category
usersBySubstring(substring: String, first: Int = 10, offset: Int = 0): [User] @cypher(statement: "MATCH (u:User) WHERE u.name CONTAINS $substring RETURN u")
}
type Mutation {
deleteUser(id: ID!): User
}
`;
export const resolvers = {
Query: {
users: neo4jgraphql,
businesses: neo4jgraphql,
reviews: neo4jgraphql,
category: neo4jgraphql,
usersBySubstring: neo4jgraphql
},
Mutation: {
deleteUser: neo4jgraphql
}
};
Here's my index:
import { typeDefs, resolvers } from "./graphql-schema";
import { ApolloServer, gql, makeExecutableSchema } from "apollo-server";
import { v1 as neo4j } from "neo4j-driver";
import { augmentSchema } from "neo4j-graphql-js";
import dotenv from "dotenv";
dotenv.config();
const schema = makeExecutableSchema({
typeDefs,
resolvers
});
// augmentSchema will add autogenerated mutations based on types in schema
const augmentedSchema = augmentSchema(schema);
const driver = neo4j.driver(
process.env.NEO4J_URI || "bolt://localhost:7687",
neo4j.auth.basic(
process.env.NEO4J_USER || "neo4j",
process.env.NEO4J_PASSWORD || "neo4j"
)
);
if (driver){
console.log("Database Connected")
} else{
console.log("Database Connection Error")
}
const server = new ApolloServer({
// using augmentedSchema (executable GraphQLSchemaObject) instead of typeDefs and resolvers
//typeDefs,
//resolvers,
context: { driver },
// remove schema and uncomment typeDefs and resolvers above to use original (unaugmented) schema
schema: augmentedSchema
});
server.listen(process.env.GRAPHQL_LISTEN_PORT, '0.0.0.0').then(({ url }) => {
console.log(`GraphQL API ready at ${url}`);
});
Error message (screenshot): the Mutation type is defined more than once.
No other code from the starter has been changed. I haven't been able to find much info on the augmentedSchema in the docs. If anyone can point me in the right direction I would appreciate it.
The issue was caused by the augmented schema, which automatically creates default mutations for each type; in this case, it had already created delete, update, and add mutations for User. You can see this when you run the GraphiQL server. So I was declaring the deleteUser mutation twice.

Firestore query for string that contains the given keyword swift [duplicate]

I am looking to add a simple search field, would like to use something like
collectionRef.where('name', 'contains', 'searchTerm')
I tried using where('name', '==', '%searchTerm%'), but it didn't return anything.
I agree with @Kuba's answer, but it still needs a small change to work perfectly for search by prefix. Here is what worked for me.
For searching records whose name starts with queryText:
collectionRef
.where('name', '>=', queryText)
.where('name', '<=', queryText+ '\uf8ff')
The character \uf8ff used in the query is a very high code point in the Unicode range (it is a Private Use Area [PUA] code). Because it is after most regular characters in Unicode, the query matches all values that start with queryText.
Full-Text Search, Relevant Search, and Trigram Search!
UPDATE - 2/17/21 - I created several new Full Text Search Options.
See Code.Build for details.
Also, side note, dgraph now has websockets for realtime... wow, never saw that coming, what a treat! Cloud Dgraph - Amazing!
--Original Post--
A few notes here:
1.) \uf8ff works the same way as ~
2.) You can use a where clause or start end clauses:
ref.orderBy('title').startAt(term).endAt(term + '~');
is exactly the same as
ref.where('title', '>=', term).where('title', '<=', term + '~');
3.) No, it does not work if you reverse startAt() and endAt() in every combination, however, you can achieve the same result by creating a second search field that is reversed, and combining the results.
Example: First you have to save a reversed version of the field when the field is created. Something like this:
// collection
const postRef = db.collection('posts')
async function searchTitle(term) {
// reverse term
const termR = term.split("").reverse().join("");
// define queries
const titles = postRef.orderBy('title').startAt(term).endAt(term + '~').get();
const titlesR = postRef.orderBy('titleRev').startAt(termR).endAt(termR + '~').get();
// get queries
const [titleSnap, titlesRSnap] = await Promise.all([
titles,
titlesR
]);
return (titleSnap.docs).concat(titlesRSnap.docs);
}
With this, you can search the last letters of a string field and the first, just not random middle letters or groups of letters. This is closer to the desired result. However, this won't really help us when we want random middle letters or words. Also, remember to save everything lowercase, or a lowercase copy for searching, so case won't be an issue.
4.) If you have only a few words, Ken Tan's Method will do everything you want, or at least after you modify it slightly. However, with only a paragraph of text, you will exponentially create more than 1MB of data, which is bigger than firestore's document size limit (I know, I tested it).
5.) If you could combine array-contains (or some form of arrays) with the \uf8ff trick, you might have a viable search that does not reach the limits. I tried every combination, even with maps, and it was a no go. If anyone figures this out, post it here.
6.) If you must get away from Algolia and ElasticSearch, and I don't blame you at all, you could always use MySQL, PostgreSQL, or Neo4j on Google Cloud. All three are easy to set up, and they have free tiers. You would have one cloud function to save the data onCreate() and another onCall() function to search the data. Simple...ish. Why not just switch to MySQL then? The real-time data, of course! When someone writes DGraph with websockets for real-time data, count me in!
Algolia and ElasticSearch were built to be search-only DBs, so there is nothing as quick... but you pay for it. Google, why do you lead us away from Google, and why don't you follow MongoDB's noSQL and allow searches?
There's no such operator; the allowed ones are ==, <, <=, >, >=.
You can filter by prefixes only, for example for everything that starts between bar and foo you can use
collectionRef
.where('name', '>=', 'bar')
.where('name', '<=', 'foo')
You can use external service like Algolia or ElasticSearch for that.
While Kuba's answer is true as far as restrictions go, you can partially emulate this with a set-like structure:
{
'terms': {
'reebok': true,
'mens': true,
'tennis': true,
'racket': true
}
}
Now you can query with
collectionRef.where('terms.tennis', '==', true)
This works because Firestore will automatically create an index for every field. Unfortunately this doesn't work directly for compound queries because Firestore doesn't automatically create composite indexes.
You can still work around this by storing combinations of words but this gets ugly fast.
You're still probably better off with an outboard full text search.
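Purely for illustration, here is roughly what the "combinations of words" workaround could look like in Dart with the cloud_firestore plugin (the collection, field, and function names are placeholders, not a recommendation; as noted, this gets ugly fast):

import 'package:cloud_firestore/cloud_firestore.dart';

// Build a terms map containing each word and each pair of words, so a
// compound search like "mens" + "tennis" becomes a single == query.
Map<String, bool> buildTermsMap(List<String> words) {
  final terms = <String, bool>{};
  for (final w in words) {
    terms[w] = true;
    for (final other in words) {
      // Join pairs with '_' so the map keys stay valid Firestore field names.
      if (other != w) terms['${w}_$other'] = true;
    }
  }
  return terms;
}

Future<void> saveProduct(String name, List<String> words) async {
  await FirebaseFirestore.instance.collection('products').add({
    'name': name,
    'terms': buildTermsMap(words), // e.g. {reebok: true, mens: true, reebok_mens: true, ...}
  });
}

// Each single key (or pair key) can then be queried with a plain == filter.
Future<QuerySnapshot> searchTerm(String key) {
  return FirebaseFirestore.instance
      .collection('products')
      .where('terms.$key', isEqualTo: true)
      .get();
}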
While Firebase does not explicitly support searching for a term within a string,
Firebase does (now) support the following which will solve for your case and many others:
As of August 2018 they support array-contains query. See: https://firebase.googleblog.com/2018/08/better-arrays-in-cloud-firestore.html
You can now put all of your key terms into an array field, then query for all documents whose array contains 'X'. You can use a logical AND to make further comparisons for additional queries. (This is because Firebase does not currently natively support compound queries with multiple array-contains clauses, so 'AND' filtering has to be done on the client end; see the sketch after the usage snippet below.)
Using arrays in this style allows them to be optimized for concurrent writes, which is nice! I haven't tested whether it supports batch requests (the docs don't say), but I'd wager it does since it's an official solution.
Usage:
collection("collectionPath").
where("searchTermsArray", "array-contains", "term").get()
Per the Firestore docs, Cloud Firestore doesn't support native indexing or search for text fields in documents. Additionally, downloading an entire collection to search for fields client-side isn't practical.
Third-party search solutions like Algolia and Elastic Search are recommended.
I'm sure Firebase will come out with "string-contains" soon to capture any index[i] startAt in the string... but until then, I've researched the web and found this solution thought up by someone else.
Set up your data like this:
state = { title: "Knitting" };
// ...
const c = this.state.title.toLowerCase();
var array = [];
for (let i = 1; i < c.length + 1; i++) {
array.push(c.substring(0, i));
}
firebase
.firestore()
.collection("clubs")
.doc(documentId)
.update({
title: this.state.title,
titleAsArray: array
});
query like this
firebase.firestore()
.collection("clubs")
.where(
"titleAsArray",
"array-contains",
this.state.userQuery.toLowerCase()
)
As of today (18-Aug-2020), there are basically 3 different workarounds, which were suggested by the experts, as answers to the question.
I have tried them all. I thought it might be useful to document my experience with each one of them.
Method-A: Using: (dbField ">=" searchString) & (dbField "<=" searchString + "\uf8ff")
Suggested by @Kuba & @Ankit Prajapati
.where("dbField1", ">=", searchString)
.where("dbField1", "<=", searchString + "\uf8ff");
A.1 Firestore queries can only perform range filters (>, <, >=, <=) on a single field. Queries with range filters on multiple fields are not supported. By using this method, you can't have a range operator in any other field on the db, e.g. a date field.
A.2. This method does NOT work for searching in multiple fields at the same time. For example, you can't check if a search string is in any of the fields (name, notes & address).
Method-B: Using a MAP of search strings with "true" for each entry in the map, & using the "==" operator in the queries
Suggested by @Gil Gilbert
document1 = {
'searchKeywordsMap': {
'Jam': true,
'Butter': true,
'Muhamed': true,
'Green District': true,
'Muhamed, Green District': true,
}
}
.where(`searchKeywordsMap.${searchString}`, "==", true);
B.1 Obviously, this method requires extra processing every time data is saved to the db, and more importantly, requires extra space to store the map of search strings.
B.2 If a Firestore query has a single condition like the one above, no index needs to be created beforehand. This solution would work just fine in this case.
B.3 However, if the query has another condition, e.g. (status === "active"), it seems that an index is required for each "search string" the user enters. In other words, if a user searches for "Jam" and another user searches for "Butter", an index should be created beforehand for the string "Jam", and another one for "Butter", etc. Unless you can predict all possible users' search strings, this does NOT work when the query has other conditions!
.where(searchKeywordsMap["Jam"], "==", true); // requires an index on searchKeywordsMap["Jam"]
.where("status", "==", "active");
Method-C: Using an ARRAY of search strings, & the "array-contains" operator
Suggested by @Albert Renshaw & demonstrated by @Nick Carducci
document1 = {
'searchKeywordsArray': [
'Jam',
'Butter',
'Muhamed',
'Green District',
'Muhamed, Green District',
]
}
.where("searchKeywordsArray", "array-contains", searchString);
C.1 Similar to Method-B, this method requires extra processing every time data is saved to the db, and more importantly, requires extra space to store the array of search strings.
C.2 Firestore queries can include at most one "array-contains" or "array-contains-any" clause in a compound query.
General Limitations:
None of these solutions seems to support searching for partial strings. For example, if a db field contains "1 Peter St, Green District", you can't search for the string "strict."
It is almost impossible to cover all possible combinations of expected search strings. For example, if a db field contains "1 Mohamed St, Green District", you may NOT be able to search for the string "Green Mohamed", which is a string having the words in a different order than the order used in the DB field.
There is no one solution that fits all. Each workaround has its limitations. I hope the information above can help you during the selection process between these workarounds.
For a list of Firestore query conditions, please check out the documentation https://firebase.google.com/docs/firestore/query-data/queries.
I have not tried https://fireblog.io/blog/post/firestore-full-text-search, which is suggested by @Jonathan.
Late answer, but for anyone who's still looking for one: let's say we have a collection of users, and each document in the collection has a "username" field. If we want to find documents where the username starts with "al", we can do something like:
FirebaseFirestore.getInstance()
    .collection("users")
    .whereGreaterThanOrEqualTo("username", "al")
    // The \uf8ff upper bound keeps the range limited to the "al" prefix.
    .whereLessThanOrEqualTo("username", "al" + "\uf8ff")
I used trigrams, just like Jonathan said.
Trigrams are groups of 3 letters stored in the database to help with searching. So if I have user data and, let's say, I want the query 'trum' to find Donald Trump, I have to store the name's trigrams on the document.
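A rough sketch of what that write could look like (the buildTrigrams helper is my own illustration; the 'trigram' array field matches the query below):

import 'package:cloud_firestore/cloud_firestore.dart';

// Hypothetical helper: builds the trigrams of a lowercased string,
// e.g. 'trump' -> ['tru', 'rum', 'ump'].
List<String> buildTrigrams(String text) {
  final s = text.toLowerCase();
  return [
    for (var i = 0; i + 3 <= s.length; i++) s.substring(i, i + 3),
  ];
}

// Store the trigrams next to the user document so they can be queried later.
Future<void> saveUser(String name) async {
  await FirebaseFirestore.instance.collection('users').add({
    'name': name,
    'trigram': buildTrigrams(name), // e.g. 'donald trump' -> [don, ona, nal, ald, ...]
  });
}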
Then I query them back like this:
onPressed: () {
  // Let's say you type 'trum' for Trump; its trigrams are 'tru' and 'rum'.
  List<String> search = ['tru', 'rum'];
  Future<QuerySnapshot> inst = FirebaseFirestore.instance
      .collection("users")
      .where('trigram', arrayContainsAny: search)
      .get();
  print('result=');
  inst.then((value) {
    for (var i in value.docs) {
      print(i.data()['name']);
    }
  });
},
That will get the correct result no matter what.
EDIT 05/2021:
Google Firebase now has an extension to implement Search with Algolia. Algolia is a full text search platform that has an extensive list of features. You are required to have a "Blaze" plan on Firebase and there are fees associated with Algolia queries, but this would be my recommended approach for production applications. If you prefer a free basic search, see my original answer below.
https://firebase.google.com/products/extensions/firestore-algolia-search
https://www.algolia.com
ORIGINAL ANSWER:
The selected answer only works for exact searches, which is not natural user search behavior (searching for "apple" in "Joe ate an apple today" would not return a result).
I think Dan Fein's answer above should be ranked higher. If the String data you're searching through is short, you can save all substrings of the string in an array in your Document and then search through the array with Firebase's array_contains query. Firebase Documents are limited to 1 MiB (1,048,576 bytes) (Firebase Quotas and Limits), which is about 1 million characters saved in a document (roughly 1 character ~= 1 byte). Storing the substrings is fine as long as your document isn't close to the 1 million mark.
Example to search user names:
Step 1: Add the following String extension to your project. This lets you easily break up a string into substrings. (I found this here).
extension String {
var length: Int {
return count
}
subscript (i: Int) -> String {
return self[i ..< i + 1]
}
func substring(fromIndex: Int) -> String {
return self[min(fromIndex, length) ..< length]
}
func substring(toIndex: Int) -> String {
return self[0 ..< max(0, toIndex)]
}
subscript (r: Range<Int>) -> String {
let range = Range(uncheckedBounds: (lower: max(0, min(length, r.lowerBound)),
upper: min(length, max(0, r.upperBound))))
let start = index(startIndex, offsetBy: range.lowerBound)
let end = index(start, offsetBy: range.upperBound - range.lowerBound)
return String(self[start ..< end])
}
}
Step 2: When you store a user's name, also store the result of this function as an array in the same Document. This creates all variations of the original text and stores them in an array. For example, the text input "Apple" would create the following array: ["a", "p", "p", "l", "e", "ap", "pp", "pl", "le", "app", "ppl", "ple", "appl", "pple", "apple"], which should encompass all search criteria a user might enter. You can leave maximumStringSize as nil if you want all results; however, if there is long text, I would recommend capping it before the document size gets too big - somewhere around 15 works fine for me (most people don't search long phrases anyway).
func createSubstringArray(forText text: String, maximumStringSize: Int?) -> [String] {
var substringArray = [String]()
var characterCounter = 1
let textLowercased = text.lowercased()
let characterCount = text.count
for _ in 0...characterCount {
for x in 0...characterCount {
let lastCharacter = x + characterCounter
if lastCharacter <= characterCount {
let substring = textLowercased[x..<lastCharacter]
substringArray.append(substring)
}
}
characterCounter += 1
if let max = maximumStringSize, characterCounter > max {
break
}
}
print(substringArray)
return substringArray
}
Step 3: You can use Firebase's array_contains function!
[yourDatabasePath].whereField([savedSubstringArray], arrayContains: searchText).getDocuments....
I just had this problem and came up with a pretty simple solution.
String search = "ca";
Firestore.instance.collection("categories").orderBy("name").where("name",isGreaterThanOrEqualTo: search).where("name",isLessThanOrEqualTo: search+"z")
The isGreaterThanOrEqualTo lets us filter out the beginning of our search and by adding a "z" to the end of the isLessThanOrEqualTo we cap our search to not roll over to the next documents.
I actually think the best solution to do this within Firestore is to put all substrings in an array, and just do an array_contains query. This allows you to do substring matching. A bit overkill to store all substrings but if your search terms are short it's very very reasonable.
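A rough Dart sketch of that idea with the cloud_firestore plugin (the collection and field names are placeholders): generate every substring when the document is written, then search with a single array-contains query.

import 'package:cloud_firestore/cloud_firestore.dart';

// All substrings of a lowercased string, e.g. 'cat' -> [c, ca, cat, a, at, t].
List<String> allSubstrings(String text) {
  final s = text.toLowerCase();
  final result = <String>[];
  for (var start = 0; start < s.length; start++) {
    for (var end = start + 1; end <= s.length; end++) {
      result.add(s.substring(start, end));
    }
  }
  return result;
}

// Write the substrings alongside the document...
Future<void> saveProduct(String name) async {
  await FirebaseFirestore.instance.collection('products').add({
    'name': name,
    'nameSubstrings': allSubstrings(name),
  });
}

// ...then substring search becomes a single array-contains query.
Future<QuerySnapshot> searchProducts(String term) {
  return FirebaseFirestore.instance
      .collection('products')
      .where('nameSubstrings', arrayContains: term.toLowerCase())
      .get();
}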
If you don't want to use a third-party service like Algolia, Firebase Cloud Functions are a great alternative. You can create a function that can receive an input parameter, process through the records server-side and then return the ones that match your criteria.
This worked for me perfectly but might cause performance issues.
Do this when querying firestore:
Future<QuerySnapshot> searchResults = collectionRef
.where('property', isGreaterThanOrEqualTo: searchQuery.toUpperCase())
.getDocuments();
Do this in your FutureBuilder:
return FutureBuilder(
  future: searchResults,
  builder: (context, snapshot) {
    List<Model> searchResults = [];
    snapshot.data.documents.forEach((doc) {
      Model model = Model.fromDocument(doc);
      if (searchQuery.isNotEmpty &&
          !model.property.toLowerCase().contains(searchQuery.toLowerCase())) {
        return;
      }
      searchResults.add(model);
    });
    // Build and return your widget from the filtered results here;
    // buildResults is just a placeholder for your own widget code.
    return buildResults(searchResults);
  },
);
The following code snippet takes input from the user and fetches records whose name starts with the typed text.
Sample Data:
Under Firebase Collection 'Users'
user1: {name: 'Ali', age: 28},
user2: {name: 'Khan', age: 30},
user3: {name: 'Hassan', age: 26},
user4: {name: 'Adil', age: 32}
TextInput: A
Result:
{name: 'Ali', age: 28},
{name: 'Adil', age: 32}
let timer;
// method called onChangeText from TextInput
const textInputSearch = (text) => {
const inputStart = text.trim();
let lastLetterCode = inputStart.charCodeAt(inputStart.length-1);
lastLetterCode++;
const newLastLetter = String.fromCharCode(lastLetterCode);
const inputEnd = inputStart.slice(0, inputStart.length - 1) + newLastLetter;
clearTimeout(timer);
timer = setTimeout(() => {
firestore().collection('Users')
.where('name', '>=', inputStart)
.where('name', '<', inputEnd)
.limit(10)
.get()
.then(querySnapshot => {
const users = [];
querySnapshot.forEach(doc => {
users.push(doc.data());
})
setUsers(users); // Setting Respective State
});
}, 1000);
};
2021 Update
Took a few things from other answers. This one includes:
Multi word search using split (acts as OR)
Multi key search using flat
A bit limited on case-sensitivity; you can solve this by storing duplicate properties in uppercase, e.g. comparing query.toUpperCase() against user.last_name_upper
// query: searchable terms as string
let users = await searchResults("Bob Dylan", 'users');
async function searchResults(query = null, collection = 'users', keys = ['last_name', 'first_name', 'email']) {
let querySnapshot = { docs : [] };
try {
if (query) {
let search = async (query)=> {
let queryWords = query.trim().split(' ');
return queryWords.map((queryWord) => keys.map(async (key) =>
await firebase
.firestore()
.collection(collection)
.where(key, '>=', queryWord)
.where(key, '<=', queryWord + '\uf8ff')
.get())).flat();
}
let results = await search(query);
await (await Promise.all(results)).forEach((search) => {
querySnapshot.docs = querySnapshot.docs.concat(search.docs);
});
} else {
// No query
querySnapshot = await firebase
.firestore()
.collection(collection)
// Pagination (optional)
// .orderBy(sortField, sortOrder)
// .startAfter(startAfter)
// .limit(perPage)
.get();
}
} catch(err) {
console.log(err)
}
// Appends id and creates clean Array
const items = [];
querySnapshot.docs.forEach(doc => {
let item = doc.data();
item.id = doc.id;
items.push(item);
});
// Filters duplicates
return items.filter((v, i, a) => a.findIndex(t => (t.id === v.id)) === i);
}
Note: the number of Firebase calls is equivalent to the number of words in the query string * the number of keys you're searching on.
Same as @nicksarno's answer, but with more polished code that doesn't need any extension:
Step 1
func getSubstrings(from string: String, maximumSubstringLength: Int = .max) -> [Substring] {
let string = string.lowercased()
let stringLength = string.count
let stringStartIndex = string.startIndex
var substrings: [Substring] = []
for lowBound in 0..<stringLength {
for upBound in lowBound..<min(stringLength, lowBound+maximumSubstringLength) {
let lowIndex = string.index(stringStartIndex, offsetBy: lowBound)
let upIndex = string.index(stringStartIndex, offsetBy: upBound)
substrings.append(string[lowIndex...upIndex])
}
}
return substrings
}
Step 2
let name = "Lorenzo"
ref.setData(["name": name, "nameSubstrings": getSubstrings(from: name)])
Step 3
Firestore.firestore().collection("Users")
.whereField("nameSubstrings", arrayContains: searchText)
.getDocuments...
With Firestore you can implement a full-text search, but it will still cost more reads than it would otherwise, and you'll need to enter and index the data in a particular way. In this approach you can use Firebase Cloud Functions to tokenise and then hash your input text, choosing a linear hash function h(x) that satisfies the following: if x < y < z then h(x) < h(y) < h(z). For tokenisation, you can choose a lightweight NLP library (to keep the cold-start time of your function low) that strips unnecessary words from your sentence. Then you can run a query with the less-than and greater-than operators in Firestore.
While storing your data, you'll also have to make sure that you hash the text before storing it, and store the plain text as well, because if you change the plain text the hashed value will also change.
The Typesense service provides substring search for a Firebase Cloud Firestore database:
https://typesense.org/docs/guide/firebase-full-text-search.html
The following is the relevant Typesense integration code from my project.
lib/utils/typesense.dart
import 'dart:convert';
import 'package:flutter_instagram_clone/model/PostModel.dart';
import 'package:http/http.dart' as http;
class Typesense {
static String baseUrl = 'http://typesense_server_ip:port/';
static String apiKey = 'xxxxxxxx'; // your Typesense API key
static String resource = 'collections/postData/documents/search';
static Future<List<PostModel>> search(String searchKey, int page, {int contentType=-1}) async {
if (searchKey.isEmpty) return [];
List<PostModel> _results = [];
var header = {'X-TYPESENSE-API-KEY': apiKey};
String strSearchKey4Url = searchKey.replaceFirst('#', '%23').replaceAll(' ', '%20');
String url = baseUrl +
resource +
'?q=${strSearchKey4Url}&query_by=postText&page=$page&sort_by=millisecondsTimestamp:desc&num_typos=0';
if(contentType==0)
{
url += "&filter_by=isSelling:false";
} else if(contentType == 1)
{
url += "&filter_by=isSelling:true";
}
var response = await http.get(Uri.parse(url), headers: header);
var data = json.decode(response.body);
for (var item in data['hits']) {
PostModel _post = PostModel.fromTypeSenseJson(item['document']);
if (searchKey.contains('#')) {
if (_post.postText.toLowerCase().contains(searchKey.toLowerCase()))
_results.add(_post);
} else {
_results.add(_post);
}
}
print(_results.length);
return _results;
}
static Future<List<PostModel>> getHubPosts(String searchKey, int page,
{List<String>? authors, bool? isSelling}) async {
List<PostModel> _results = [];
var header = {'X-TYPESENSE-API-KEY': apiKey};
String filter = "";
if (authors != null || isSelling != null) {
filter += "&filter_by=";
if (isSelling != null) {
filter += "isSelling:$isSelling";
if (authors != null && authors.isNotEmpty) {
filter += "&&";
}
}
if (authors != null && authors.isNotEmpty) {
filter += "authorID:$authors";
}
}
String url = baseUrl +
resource +
'?q=${searchKey.replaceFirst('#', '%23')}&query_by=postText&page=$page&sort_by=millisecondsTimestamp:desc&num_typos=0$filter';
var response = await http.get(Uri.parse(url), headers: header);
var data = json.decode(response.body);
for (var item in data['hits']) {
PostModel _post = PostModel.fromTypeSenseJson(item['document']);
_results.add(_post);
}
print(_results.length);
return _results;
}
}
lib/services/hubDetailsService.dart
import 'package:flutter/material.dart';
import 'package:flutter_instagram_clone/model/PostModel.dart';
import 'package:flutter_instagram_clone/utils/typesense.dart';
class HubDetailsService with ChangeNotifier {
String searchKey = '';
List<String>? authors;
bool? isSelling;
int nContentType=-1;
bool isLoading = false;
List<PostModel> hubResults = [];
int _page = 1;
bool isMore = true;
bool noResult = false;
Future initSearch() async {
isLoading = true;
isMore = true;
noResult = false;
hubResults = [];
_page = 1;
List<PostModel> _results = await Typesense.search(searchKey, _page, contentType: nContentType);
for(var item in _results) {
hubResults.add(item);
}
isLoading = false;
if(_results.length < 10) isMore = false;
if(_results.isEmpty) noResult = true;
notifyListeners();
}
Future nextPage() async {
if(!isMore) return;
_page++;
List<PostModel> _results = await Typesense.search(searchKey, _page);
hubResults.addAll(_results);
if(_results.isEmpty) {
isMore = false;
}
notifyListeners();
}
Future refreshPage() async {
isLoading = true;
notifyListeners();
await initSearch();
isLoading = false;
notifyListeners();
}
Future search(String _searchKey) async {
isLoading = true;
notifyListeners();
searchKey = _searchKey;
await initSearch();
isLoading = false;
notifyListeners();
}
}
lib/ui/hub/hubDetailsScreen.dart
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
import 'package:flutter_instagram_clone/constants.dart';
import 'package:flutter_instagram_clone/main.dart';
import 'package:flutter_instagram_clone/model/MessageData.dart';
import 'package:flutter_instagram_clone/model/SocialReactionModel.dart';
import 'package:flutter_instagram_clone/model/User.dart';
import 'package:flutter_instagram_clone/model/hubModel.dart';
import 'package:flutter_instagram_clone/services/FirebaseHelper.dart';
import 'package:flutter_instagram_clone/services/HubService.dart';
import 'package:flutter_instagram_clone/services/helper.dart';
import 'package:flutter_instagram_clone/services/hubDetailsService.dart';
import 'package:flutter_instagram_clone/ui/fullScreenImageViewer/FullScreenImageViewer.dart';
import 'package:flutter_instagram_clone/ui/home/HomeScreen.dart';
import 'package:flutter_instagram_clone/ui/hub/editHubScreen.dart';
import 'package:provider/provider.dart';
import 'package:smooth_page_indicator/smooth_page_indicator.dart';
class HubDetailsScreen extends StatefulWidget {
final HubModel hub;
HubDetailsScreen(this.hub);
@override
_HubDetailsScreenState createState() => _HubDetailsScreenState();
}
class _HubDetailsScreenState extends State<HubDetailsScreen> {
late HubDetailsService _service;
List<SocialReactionModel?> _reactionsList = [];
final fireStoreUtils = FireStoreUtils();
late Future<List<SocialReactionModel>> _myReactions;
final scrollController = ScrollController();
bool _isSubLoading = false;
@override
void initState() {
// TODO: implement initState
super.initState();
_service = Provider.of<HubDetailsService>(context, listen: false);
print(_service.isLoading);
init();
}
init() async {
_service.searchKey = "";
if(widget.hub.contentWords.length>0)
{
for(var item in widget.hub.contentWords) {
_service.searchKey += item + " ";
}
}
switch(widget.hub.contentType) {
case 'All':
break;
case 'Marketplace':
_service.isSelling = true;
_service.nContentType = 1;
break;
case 'Post Only':
_service.isSelling = false;
_service.nContentType = 0;
break;
case 'Keywords':
break;
}
for(var item in widget.hub.exceptWords) {
if(item == 'Marketplace') {
_service.isSelling = _service.isSelling != null?true:false;
} else {
_service.searchKey += "-" + item + "";
}
}
if(widget.hub.fromUserType == 'Followers') {
List<User> _followers = await fireStoreUtils.getFollowers(MyAppState.currentUser!.userID);
_service.authors = [];
for(var item in _followers)
_service.authors!.add(item.userID);
}
if(widget.hub.fromUserType == 'Selected') {
_service.authors = widget.hub.fromUserIds;
}
_service.initSearch();
_myReactions = fireStoreUtils.getMyReactions()
..then((value) {
_reactionsList.addAll(value);
});
scrollController.addListener(pagination);
}
void pagination(){
if(scrollController.position.pixels ==
scrollController.position.maxScrollExtent) {
_service.nextPage();
}
}
@override
Widget build(BuildContext context) {
Provider.of<HubDetailsService>(context);
PageController _controller = PageController(
initialPage: 0,
);
return Scaffold(
backgroundColor: Colors.white,
body: RefreshIndicator(
onRefresh: () async {
_service.refreshPage();
},
child: CustomScrollView(
controller: scrollController,
slivers: [
SliverAppBar(
centerTitle: false,
expandedHeight: MediaQuery.of(context).size.height * 0.25,
pinned: true,
backgroundColor: Colors.white,
title: Row(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: [
InkWell(
onTap: (){
Navigator.pop(context);
},
child: Container(
width: 35, height: 35,
decoration: BoxDecoration(
color: Colors.white,
borderRadius: BorderRadius.circular(20)
),
child: Center(
child: Icon(Icons.arrow_back),
),
),
),
if(widget.hub.user.userID == MyAppState.currentUser!.userID)
InkWell(
onTap: () async {
var _hub = await push(context, EditHubScreen(widget.hub));
if(_hub != null) {
Navigator.pop(context, true);
}
},
child: Container(
width: 35, height: 35,
decoration: BoxDecoration(
color: Colors.white,
borderRadius: BorderRadius.circular(20)
),
child: Center(
child: Icon(Icons.edit, color: Colors.black, size: 20,),
),
),
),
],
),
automaticallyImplyLeading: false,
flexibleSpace: FlexibleSpaceBar(
collapseMode: CollapseMode.pin,
background: Container(color: Colors.grey,
child: Stack(
children: [
PageView.builder(
controller: _controller,
itemCount: widget.hub.medias.length,
itemBuilder: (context, index) {
Url postMedia = widget.hub.medias[index];
return GestureDetector(
onTap: () => push(
context,
FullScreenImageViewer(
imageUrl: postMedia.url)),
child: displayPostImage(postMedia.url));
}),
if (widget.hub.medias.length > 1)
Padding(
padding: const EdgeInsets.only(bottom: 30.0),
child: Align(
alignment: Alignment.bottomCenter,
child: SmoothPageIndicator(
controller: _controller,
count: widget.hub.medias.length,
effect: ScrollingDotsEffect(
dotWidth: 6,
dotHeight: 6,
dotColor: isDarkMode(context)
? Colors.white54
: Colors.black54,
activeDotColor: Color(COLOR_PRIMARY)),
),
),
),
],
),
)
),
),
_service.isLoading?
SliverFillRemaining(
child: Center(
child: CircularProgressIndicator(),
),
):
SliverList(
delegate: SliverChildListDelegate([
if(widget.hub.userId != MyAppState.currentUser!.userID)
_isSubLoading?
Center(
child: Padding(
padding: EdgeInsets.all(5),
child: CircularProgressIndicator(),
),
):
Padding(
padding: EdgeInsets.symmetric(horizontal: 5),
child: widget.hub.shareUserIds.contains(MyAppState.currentUser!.userID)?
ElevatedButton(
onPressed: () async {
setState(() {
_isSubLoading = true;
});
await Provider.of<HubService>(context, listen: false).unsubscribe(widget.hub);
setState(() {
_isSubLoading = false;
widget.hub.shareUserIds.remove(MyAppState.currentUser!.userID);
});
},
style: ElevatedButton.styleFrom(
primary: Colors.red
),
child: Text(
"Unsubscribe",
),
):
ElevatedButton(
onPressed: () async {
setState(() {
_isSubLoading = true;
});
await Provider.of<HubService>(context, listen: false).subscribe(widget.hub);
setState(() {
_isSubLoading = false;
widget.hub.shareUserIds.add(MyAppState.currentUser!.userID);
});
},
style: ElevatedButton.styleFrom(
primary: Colors.green
),
child: Text(
"Subscribe",
),
),
),
Padding(
padding: EdgeInsets.all(15,),
child: Text(
widget.hub.name,
style: TextStyle(
color: Colors.black,
fontSize: 18,
fontWeight: FontWeight.bold
),
),
),
..._service.hubResults.map((e) {
if(e.isAuction && (e.auctionEnded || DateTime.now().isAfter(e.auctionEndTime??DateTime.now()))) {
return Container();
}
return PostWidget(post: e);
}).toList(),
if(_service.noResult)
Padding(
padding: EdgeInsets.all(20),
child: Text(
'No results for this hub',
style: TextStyle(
fontSize: 18,
fontWeight: FontWeight.bold
),
),
),
if(_service.isMore)
Center(
child: Container(
padding: EdgeInsets.all(5),
child: CircularProgressIndicator(),
),
)
]),
)
],
),
)
);
}
}
You can try using 2 Lambdas and S3. These resources are very cheap, and you will only be charged significantly once the app has heavy usage (if the business model is good, then high usage -> higher income).
The first Lambda is used to push a text-to-document mapping to an S3 JSON file.
The second Lambda is basically your search API; you use it to query the JSON in S3 and return the results.
The drawback will probably be the latency from S3 to Lambda.
I use this with Vue.js:
query(collection(db, 'collection'), where("name", ">=", 'searchTerm'), where("name", "<=", 'searchTerm' + '~'))
I also couldn't manage to create a search function for Firebase using the suggestions and Firebase tools, so I created my own "field-string contains search-string (substring)" check, using Kotlin's .contains() function:
firestoreDB.collection("products")
.get().addOnCompleteListener { task->
if (task.isSuccessful){
val document = task.result
if (!document.isEmpty) {
if (document != null) {
for (documents in document) {
var name = documents.getString("name")
var type = documents.getString("type")
if (name != null && type != null) {
if (name.contains(text, ignoreCase = true) || type.contains(text, ignoreCase = true)) {
// do whatever you want with the document
} else {
showNoProductsMsg()
}
}
}
}
binding.progressBarSearch.visibility = View.INVISIBLE
} else {
showNoProductsMsg()
}
} else{
showNoProductsMsg()
}
}
First, you get ALL the documents in the collection you want, then you filter them using:
for (documents in document) {
var name = documents.getString("name")
var type = documents.getString("type")
if (name != null && type != null) {
if (name.contains(text, ignoreCase = true) || type.contains(text, ignoreCase = true)) {
//do whatever you want with this document
} else {
showNoProductsMsg()
}
}
}
In my case, I filtered them all by the name of the product and its type, then I used the boolean name.contains(text, ignoreCase = true) || type.contains(text, ignoreCase = true), where text is the string typed into the search bar of my app (and I recommend you use ignoreCase = true). When this expression is true, you can do whatever you want with the document.
I guess this is the best workaround, since Firestore only supports number and exact-string queries, so try it if your code didn't work doing this:
collection.whereGreaterThanOrEqualTo("name", querySearch)
collection.whereLessThanOrEqualTo("name", querySearch)
You're welcome :) because what I did works!
Firebase suggests Algolia or ElasticSearch for Full-Text search, but a cheaper alternative might be MongoDB. The cheapest cluster (approx US$10/mth) allows you to index for full-text.
We can use a back-tick (template literal) to interpolate the value of a string. This should work:
where('name', '==', `${searchTerm}`)
