I am trying to connect my MySQL database to my Xcode project, for a simple login view where users can enter their username and password. I have the database set up already, but do I need to use JSON or something else?
<?php
// Create connection
$con = mysqli_connect("localhost", "username", "password", "dbname");

// Check connection
if (mysqli_connect_errno())
{
    echo "Failed to connect to MySQL: " . mysqli_connect_error();
}

// This SQL statement selects ALL from the table 'Locations'
$sql = "SELECT * FROM Locations";

// Check if there are results
if ($result = mysqli_query($con, $sql))
{
    // If so, then create a results array and a temporary one
    // to hold the data
    $resultArray = array();
    $tempArray = array();

    // Loop through each row in the result set
    while ($row = $result->fetch_object())
    {
        // Add each row into our results array
        $tempArray = $row;
        array_push($resultArray, $tempArray);
    }

    // Finally, encode the array to JSON and output the results
    echo json_encode($resultArray);
}

// Close connections
mysqli_close($con);
?>
Here is a good article on this: http://codewithchris.com/iphone-app-connect-to-mysql-database/#parsejson
I haven't spent much time with Xcode, but as far as I know there is no native API/framework for connecting directly to a MySQL database from an iOS app (the same goes for most comparable frameworks). So yes: you need to set up a web server with the database and a page that outputs JSON. Then have the app send a web request to that page, store the JSON response in a variable, and use whatever facilities iOS provides to parse and manipulate the JSON data. You'll also want to make sure that the app and the web server don't prevent one another from passing data, with the appropriate security privileges in place.
You can pass the data to your app in whatever format you want as well: JSON, CSV, or even XML. Stick with whatever is easiest to consume on the iOS side. Really, you could even make up your own data format; it's your application, so do whatever is easiest to work with.
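Whichever format you choose, it helps to sanity-check the endpoint before wiring up the app. Here is a minimal sketch in Python; the URL is a placeholder for wherever you host the PHP script above:

import json
import urllib.request

# hypothetical endpoint serving the PHP script shown above
url = "http://example.com/locations.php"

with urllib.request.urlopen(url) as response:
    locations = json.loads(response.read())

# each element corresponds to one row of the Locations table
for location in locations:
    print(location)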
I am running into a problem where PhpSpreadsheet will not write out a line chart from an xlsx file I'm using as a template.
I get two error messages when opening the spreadsheet:

"We found a problem with some content in 'Hello World.xlsx'. Do you want us to try to recover as much as we can? If you trust the source of the workbook, click Yes."

"'Hello World.xlsx' is locked for editing by 'another user'. Open 'Read-Only' or click 'Notify' to open read-only and receive notification when the document is no longer in use."
<?php
// ... snip

$reader = IOFactory::createReader($inputFileType);
$reader->setIncludeCharts(true);
$spreadsheet = $reader->load($inputFileName);

// ... snip

// populate chart data
$sheet = $spreadsheet->getSheetByName('Ratios');

// reverse order, so populate "backwards"
for ($endingRow = 14; ($endingRow > 1) && ($row = $last13Ratios->nextRecord()); $endingRow--) {
    $sheet->setCellValue('A'.$endingRow, $row->date);
    $sheet->setCellValue('B'.$endingRow, $row->percent60/100);
    $sheet->setCellValue('C'.$endingRow, $row->percent90/100);
    $helper->log(print_r($row, true)); // pass true so print_r returns the string instead of echoing it
}

// ... snip

$writer = new XlsxWriter($spreadsheet);
$writer->setIncludeCharts(true);
$callStartTime = microtime(true);
$writer->save($saveFileName);
$helper->logWrite($writer, $saveFileName, $callStartTime);
$spreadsheet->disconnectWorksheets();
die();
If I put fake data in the template, it appears to work. But I don't want fake data, just in case something goes haywire. Better to have no information than wrong information.
The short answer is to put a single quote followed by a space (' ) in the upper-left cell of the section being used for data in the template.
Note that the data type appears to make a difference too: a date instead of a string will fail.
Be sure to use the single quote mark to designate the cell as a string; otherwise Excel will drop the lone space when you save the template.
I've found a ton of information on localStorage with HTML5, but it all focuses on saving single persistent entries.
I need to be able to have a contact form (simple name/email/phone) that gets saved to the iPad and then allows another person to submit an entry to save to the iPad locally (no Wifi/Internet available).
Then I want to be able to go in later and grab all the entries that were made in whatever format available.
Any direction & help would be appreciated.
I searched on Stackoverflow & Google but still couldn't find multiple entry tutorials or examples.
Thanks!
localStorage is a good fit for what you're after. As localStorage only supports strings, you will need to do some conversions to/from JSON to serialize the entries.
Here is some general code to hopefully get you started:
// read out any previous contact forms (initialise if it is empty)
var jsonEntries = localStorage["contactFormEntries"];
var contactFormEntries = JSON.parse(jsonEntries ? jsonEntries : '[]');

// ...
// sometime later... create a contact form record
var contactForm = {
    'name': 'Timmy',
    'email': 'timmy@example.com'
};

// add it to the entries array
contactFormEntries.push(contactForm);

// serialize all the contact form entries into local storage
localStorage["contactFormEntries"] = JSON.stringify(contactFormEntries);

// ...
// then, when you want to send the entries, read them all out again
var contactFormEntries = JSON.parse(localStorage["contactFormEntries"]);
Recently, a program that creates an Access db (a requirement of our downstream partner), adds a table with all memo columns, and then inserts a bunch of records stopped working. Oddly, there were no changes in the environment that I could see and nothing in any diffs that could have affected it. Furthermore, this repros on any machine I've tried, whether it has Office or not and if it has Office, whether it's 32- or 64-bit.
The problem is that when you open the db after the program runs, the destination table is empty and instead there's a MSysCompactError table with a bunch of rows.
Here's the distilled code:
var connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=corrupt.mdb;Jet OLEDB:Engine Type=5";

// create the db and make a table
var cat = new ADOX.Catalog();
try
{
    cat.Create(connectionString);
    var tbl = new ADOX.Table();
    try
    {
        tbl.Name = "tbl";
        tbl.Columns.Append("a", ADOX.DataTypeEnum.adLongVarWChar);
        cat.Tables.Append(tbl);
    }
    finally
    {
        Marshal.ReleaseComObject(tbl);
    }
}
finally
{
    cat.ActiveConnection.Close();
    Marshal.ReleaseComObject(cat);
}

using (var connection = new OleDbConnection(connectionString))
{
    connection.Open();

    // insert a value
    using (var cmd = new OleDbCommand("INSERT INTO [tbl] VALUES ( 'x' )", connection))
        cmd.ExecuteNonQuery();
}
Here are a couple of workarounds I've stumbled into:
If you insert a breakpoint between creating the table and inserting the value, open the mdb with Access, and close it again, then when the app continues it will not corrupt the database.
Changing the engine type from 5 to 4 in the connection string will create an uncorrupted mdb. You end up with an obsolete mdb version, but the table has values and there's no MSysCompactError. Note that I've tried creating a database this way and then upgrading it to 5 programmatically at the end, with no luck: I end up with a corrupt db in the newest version.
If you change from memo to text fields by using adVarWChar instead of adLongVarWChar for the column, the database isn't corrupt. You end up with text fields in the db instead of memo, though.
A final note: in my travels, I've seen that MSysCompactError is related to compacting the database, but I'm not doing anything explicit to make the db compact.
Any ideas?
As I replied to HasUp:
According to MS support, creating Jet databases programmatically is deprecated. I ended up checking in an empty model database and then copying it whenever I needed a new one. See http://support.microsoft.com/kb/318559 for more info.
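Regardless of language, the workaround boils down to copying a checked-in template file instead of creating the mdb programmatically. A minimal sketch of the idea in Python (the file names are hypothetical):

import shutil

# 'model.mdb' is an empty template database checked into source control;
# copy it to get a fresh database instead of creating one via ADOX
shutil.copyfile("model.mdb", "new.mdb")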
I want to use Amazon DynamoDB with Rails, but I have not found a way to implement pagination.
I will use AWS::Record::HashModel as the ORM.
This ORM supports limits like this:
People.limit(10).each {|person| ... }
But I could not figure out how to implement the following MySQL query in DynamoDB:
SELECT *
FROM `People`
LIMIT 1, 30
You issue queries using LIMIT. If the subset returned does not contain the full table, a LastEvaluatedKey value is returned. You use this value as the ExclusiveStartKey in the next query. And so on...
From the DynamoDB Developer Guide.
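A minimal sketch of that loop with boto3 (the table name and page size here are placeholders):

import boto3

# placeholder table; Limit caps the number of items per request
table = boto3.resource('dynamodb').Table('People')

kwargs = {'Limit': 30}
while True:
    response = table.scan(**kwargs)
    for item in response['Items']:
        print(item)
    if 'LastEvaluatedKey' not in response:
        break  # the full table has been read
    # resume the next request where this one stopped
    kwargs['ExclusiveStartKey'] = response['LastEvaluatedKey']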
You can provide 'page-size' in your query to set the result set size.
The response from DynamoDB contains 'LastEvaluatedKey', which indicates the last key of the page. If the response doesn't contain 'LastEvaluatedKey', there are no results left to fetch.
Use the 'LastEvaluatedKey' as the 'ExclusiveStartKey' the next time you fetch.
I hope this helps.
DynamoDB Pagination
Here's a simple copy-paste-run proof of concept (Node.js) for stateless forward/reverse navigation with DynamoDB. In summary: each response includes the navigation history, allowing the user to explicitly and consistently request either the next or the previous page (while next/prev params exist):
GET /accounts -> first page
GET /accounts?next=A3r0ijKJ8 -> next page
GET /accounts?prev=R4tY69kUI -> previous page
Considerations:
If your ids are large and/or users might do a lot of navigation, then the potential size of the next/prev params might become too large.
Yes you do have to store the entire reverse path - if you only store the previous page marker (per some other answers) you will only be able to go back one page.
It won't handle changing pageSize midway, consider baking pageSize into the next/prev value.
base64 encode the next/prev values, and you could also encrypt.
Scans are inefficient, while this suited my current requirement it won't suit all!
// demo.js
const mockTable = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]

const getPagedItems = (pageSize = 5, cursor = {}) => {
    // Parse cursor
    const keys = cursor.next || cursor.prev || [] // fwd first
    let key = keys[keys.length-1] || null // eg ddb's PK

    // Mock query (mimic dynamodb response)
    const Items = mockTable.slice(parseInt(key) || 0, pageSize+key)
    const LastEvaluatedKey = Items[Items.length-1] < mockTable.length
        ? Items[Items.length-1] : null

    // Build response
    const res = {items:Items}
    if (keys.length > 0) // add reverse nav keys (if any)
        res.prev = keys.slice(0, keys.length-1)
    if (LastEvaluatedKey) // add forward nav keys (if any)
        res.next = [...keys, LastEvaluatedKey]

    return res
}

// Run test ------------------------------------
const runTest = () => {
    const PAGE_SIZE = 6
    let x = {}, i = 0

    // Page to end
    while (i == 0 || x.next) {
        x = getPagedItems(PAGE_SIZE, {next:x.next})
        console.log(`Page ${++i}: `, x.items)
    }

    // Page back to start
    while (x.prev) {
        x = getPagedItems(PAGE_SIZE, {prev:x.prev})
        console.log(`Page ${--i}: `, x.items)
    }
}

runTest()
I faced a similar problem.
The generic pagination approach uses a "start index" or "start page" and a "page length".
The "ExclusiveStartKey" and "LastEvaluatedKey" based approach is very DynamoDB-specific, and I feel this DynamoDB-specific implementation of pagination should be hidden from the API client/UI.
Also, if the application is serverless, using a service like Lambda, it will not be possible to maintain the state on the server; the flip side is that the client implementation becomes very complex.
I came up with a different approach, which I think is generic (and not specific to DynamoDB); a sketch follows the steps below.
1. When the API client specifies the start index, fetch all the keys from the table and store them in an array.
2. Find the key for the start index specified by the client in that array.
3. Use it as the ExclusiveStartKey and fetch the number of records specified by the page length.

If the start index parameter is not present, the above steps are not needed; we simply don't specify an ExclusiveStartKey in the scan operation.
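A minimal sketch of these steps with boto3 (the table name, the key attribute 'id', and the parameter names are placeholders, and the paging assumes a stable scan order):

import boto3

table = boto3.resource('dynamodb').Table('People')  # placeholder table with key 'id'

def get_page(start_index=None, page_length=10):
    # no start index: plain scan, no ExclusiveStartKey needed
    if not start_index:
        return table.scan(Limit=page_length)['Items']

    # step 1: fetch all the keys and store them in an array
    keys = []
    kwargs = {'ProjectionExpression': 'id'}
    while True:
        response = table.scan(**kwargs)
        keys.extend(response['Items'])
        if 'LastEvaluatedKey' not in response:
            break
        kwargs['ExclusiveStartKey'] = response['LastEvaluatedKey']

    # step 2: the key just before the requested start index...
    start_key = keys[start_index - 1]

    # step 3: ...becomes the ExclusiveStartKey for the page fetch
    return table.scan(Limit=page_length, ExclusiveStartKey=start_key)['Items']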
This solution has some drawbacks:
- We need to fetch all the keys when the user requests pagination with a start index.
- We need additional memory to store the ids and the indexes.
- There are additional database scan operations (one or more) to fetch the keys.
But I feel this is a very easy approach for the clients using our APIs. Backward paging works seamlessly, and if the user wants to see the "nth" page, that is possible too.
In fact, I faced the same problem and noticed that LastEvaluatedKey and ExclusiveStartKey do not work well, especially when using Scan, so I solved it like this:
GET /?page_no=1&page_size=10 => first page
The response will contain the count of records and the first 10 records; retry with an increasing page number until all records have been fetched.
The code is below (PS: I am using Python):
import boto3

def lambda_handler(event, context):
    # table and parameter names are illustrative
    table = boto3.resource('dynamodb').Table('People')
    response = table.scan()  # single scan call; items are paged client-side
    page_no = int(event['page_no'])
    page_size = int(event['page_size'])
    first_index = (page_no - 1) * page_size
    second_index = page_no * page_size
    if second_index > len(response['Items']):
        second_index = len(response['Items'])
    return {
        'statusCode': 200,
        'count': response['Count'],
        'response': response['Items'][first_index:second_index]
    }
I have a database full of messages from a bulletin board. The board uses BB codes for formatting, e.g.:
I'm not formatted
This is [b]bold[/b] text
Tags can also [i][b]be[/b] nested[/i]
And the [b]nesting [i]can be[/b] rather[/i] ugly
My ultimate goal is to convert these messages to some well-formed XML (no discussion here ;) ). I don't want to use regular expressions, which will fail at some point (in fact, they do).
First step: parse a message into some kind of internal representation (a graph, a tree, etc.). And I'm stuck at this point. The actual extraction is not that big a problem, but the storage is.
How do I represent this kind of markup in some meaningful structure? My problem seems to be similar (or almost identical) to a browser building a DOM from an HTML file, so I think there are strategies to solve it. I know the solution will not be perfect, but I'm willing to invest a vast amount of time to build the best one possible.
Question: Do you have any tips/hints/comments? Any articles or papers you can recommend? Or a book which discusses this topic? I'm grateful for any input.
And the [b]nesting [i]can be[/b] rather[/i] ugly
I've written a parser very similar to what you are looking to build, except that it would throw an error on your fourth example, something to the effect of "Unexpected end tag [/b] within [i]".
I think that what you want to do is very doable but internally you will want to create a tree as if your original text was:
"And the [b]nesting [i]can be[/i][/b][i] rather[/i] ugly". (I don't think this would be necessary if you didn't need to convert it to XML later. If there were no need to convert to XML you could keep a linked list of text sections where each section is marked with its format combination)
Two possible approaches to this problem come to mind (of course there could be better possibilities). 1) Preprocess and insert the missing end and begin tags where necessary. 2) Build your parse tree and where there are overlapping tags imply the missing ones based on the current context. I think approach number (2) would be simpler and cleaner.
You could model your tree based on a composite pattern where you have an AbstractElement class, a TextElement class that extends AbstractElement, and a Tag class that extends AbstractElement and contains a list of sub-elements of type AbstractElement.
You would start by creating a root Tag instance. You would then call rootTag.parse(text). You would need a scanner that could return 3 types of tokens: text, start-tags, and end-tags. The scanner would allow you to push tokens onto it, which it would return before any normal scanned token. This would allow you to push new start tag tokens on after encountering and dealing with the unexpected end tag. You would also have to know when you are done with input. I'll use a 4th token type for that.
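The scanner itself isn't shown in the original answer. As a rough illustration of the idea (the three token kinds plus a "done" token, and push-back support), here is a minimal sketch in Python; the names and the tag regex are my own assumptions, not from the answer:

import re

# token kinds
TEXT, START_TAG, END_TAG, DONE = 'text', 'start', 'end', 'done'

class MyScanner:
    # matches [tag] and [/tag]; everything else is text
    _token_re = re.compile(r'\[(/?)(\w+)\]')

    def __init__(self, text):
        self._pushed = []  # tokens pushed back by the parser, returned first (LIFO)
        self._tokens = []  # scanned (kind, value) pairs
        pos = 0
        for m in self._token_re.finditer(text):
            if m.start() > pos:
                self._tokens.append((TEXT, text[pos:m.start()]))
            kind = END_TAG if m.group(1) else START_TAG
            self._tokens.append((kind, m.group(2)))
            pos = m.end()
        if pos < len(text):
            self._tokens.append((TEXT, text[pos:]))
        self._index = 0

    def push(self, token):
        self._pushed.append(token)

    def get_next_token(self):
        if self._pushed:
            return self._pushed.pop()
        if self._index < len(self._tokens):
            token = self._tokens[self._index]
            self._index += 1
            return token
        return (DONE, None)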
/* methods within class Tag */

public void parse(String text) {
    MyScanner scanner = new MyScanner(text);
    parse(scanner);
}

/* returns next token */
private Token parse(MyScanner scanner) {
    Token firstToken = scanner.getNextToken();
    return parse(scanner, firstToken);
}
private Token parse(MyScanner scanner, Token token) {
    while (!token.isDone() && !token.isEndTag()) {
        if (token.isStartTag()) {
            Tag subTag = new Tag(token.getValue());
            token = scanner.getNextToken();
            token = subTag.parse(scanner, token);
            addElement(subTag);
        }
        else {
            TextElement text = new TextElement(token.getValue());
            addElement(text);
            token = scanner.getNextToken();
        }
    }
    if (token.isEndTag()) {
        if (!token.getValue().equals(getName())) {
            // unexpected end tag: schedule this tag to be reopened, and
            // return the end tag so the enclosing tag can handle it
            scanner.push(new Token(Token.START_TAG, getName()));
        }
        else {
            token = scanner.getNextToken();
        }
    }
    return token;
}
So if you were to parse "And the [b]nesting [i]can be[/b] rather[/i] ugly", the following should get created.
rootTag.parse should be adding:
TextElement: "And the "
Tag: "b"
TextElement: "nesting "
Tag: "i"
TextElement: "can be"
(... at this point the odd [/b] is encountered ...)
(... push "i" start tag on the scanner ...)
(... here the [/b] is encountered (again) ...)
Tag: "i" (this was scanned because it had been pushed to the scanner)
TextElement: " rather"
TextElement: " ugly"
Note: coding within a text area does not lend itself well to testing and debugging. Take this answer as a hint or a possibility, not as your definitive answer.