What does the DoFixture's check function expect as input? - fitnesse

I am going crazy here so bear with me...
We are using FitNesse (with the DbFit framework, based on FIT) to automate some tests in which we run shell commands. We have a fixture which connects to the Linux server, runs the command, and returns the result (see below):
import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

class SSHConnection {

    private static final String DEFAULT_KEY = "~/.ssh/id_rsa";

    private String host;
    private String password;
    private int port;
    private String privateKey;
    private Session session;
    private String user;

    /**
     * Creates a JSch object and opens a connection with a remote server using the values set in the class variables.
     */
    public void connect() {.........}

    /**
     * Executes a command on the remote server using the connection opened by connect().
     * @param command command to execute on the remote server
     * @return the output of the command as a String
     */
    public String execute(String command) {
        logger.info("Executing command: " + command);
        String result;
        try {
            ChannelExec channelExec = (ChannelExec) session.openChannel("exec");
            channelExec.setCommand(command);
            // Open exec channel
            channelExec.connect();
            InputStream stream = channelExec.getInputStream();
            BufferedReader reader = new BufferedReader(new InputStreamReader(stream));
            StringBuilder output = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                output.append(line).append("\n");
            }
            result = output.toString();
            // Close exec channel
            channelExec.disconnect();
            logger.debug("Command executed successfully: " + result);
        } catch (JSchException | IOException e) {
            logger.error("Error executing command: " + e.getMessage());
            e.printStackTrace();
            return "";
        }
        return result;
    }
}
So I'm expecting whatever gets displayed on the shell after running the command to be returned (as a string) and compared to whatever my test in FitNesse requires.
FitNesse catches the result but always fails the comparison and I don't know why (I only added a sed command to remove the whitespace, but the comparison still fails).
I feel like FitNesse is mocking me, showing the same value for expected, actual and diff.
Is it an encoding issue? Is it a Java type issue? How does check work?
Edit: I even tried running the shell command twice, saving the result the first time and then setting it as the expected result. It still fails.
|set | VarAddress | run command | cat AddressNoSpaces.txt |
|check| run command | cat AddressNoSpaces.txt | #{VarAddress} |

OK, problem solved: it seems that the shell command output ended with a trailing newline character which FitNesse did not like. I changed the Java class to strip the last character from the return value and it's working.
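For reference, a minimal sketch of that kind of fix (a hypothetical helper, not the exact code used; it assumes the only unwanted character is the trailing newline appended by the read loop in execute()):

// Remove the single trailing newline added by the read loop, so FitNesse's
// |check| compares against the bare command output.
private static String stripTrailingNewline(String raw) {
    if (raw.endsWith("\n")) {
        return raw.substring(0, raw.length() - 1);
    }
    return raw;
}

In execute(), the method would then return stripTrailingNewline(output.toString()) instead of output.toString().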

Cloud Dataflow Custom Template creation Issues

I am trying to create a template for a Cloud Dataflow job that reads a JSON file from Cloud Storage and writes to BigQuery. I am passing two runtime arguments: 1. InputFile for the GCS location, 2. the dataset and table ID for BigQuery.
JsonTextToBqTemplate code:
public class JsonTextToBqTemplate {

    private static final Logger logger =
            LoggerFactory.getLogger(TextToBQTemplate.class);

    private static Gson gson = new GsonBuilder().create();

    public static void main(String[] args) throws Exception {
        JsonToBQTemplateOptions options =
                PipelineOptionsFactory.fromArgs(args).withValidation()
                        .as(JsonToBQTemplateOptions.class);
        String jobName = options.getJobName();

        try {
            logger.info("PIPELINE-INFO: jobName={} message={} ",
                    jobName, "starting pipeline creation");

            Pipeline pipeline = Pipeline.create(options);

            pipeline.apply("ReadLines", TextIO.read().from(options.getInputFile()))
                    .apply("Converting to TableRows", ParDo.of(new DoFn<String, TableRow>() {
                        private static final long serialVersionUID = 0;

                        @ProcessElement
                        public void processElement(ProcessContext c) {
                            String json = c.element();
                            TableRow tableRow = gson.fromJson(json, TableRow.class);
                            c.output(tableRow);
                        }
                    }))
                    .apply(BigQueryIO.writeTableRows().to(options.getTableSpec())
                            .withCreateDisposition(CreateDisposition.CREATE_NEVER)
                            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

            logger.info("PIPELINE-INFO: jobName={} message={} ", jobName, "pipeline started");

            State state = pipeline.run().waitUntilFinish();

            logger.info("PIPELINE-INFO: jobName={} message={} ", jobName, "pipeline status" + state);
        } catch (Exception exception) {
            throw exception;
        }
    }
}
Options Code:
public interface JsonToBQTemplateOptions extends PipelineOptions {

    ValueProvider<String> getInputFile();
    void setInputFile(ValueProvider<String> value);

    ValueProvider<String> getErrorOutput();
    void setErrorOutput(ValueProvider<String> value);

    ValueProvider<String> getTableSpec();
    void setTableSpec(ValueProvider<String> value);
}
Maven command to create template:
mvn -X compile exec:java \
-Dexec.mainClass=com.xyz.adp.pipeline.template.JsonTextToBqTemplate \
-Dexec.args="--project=xxxxxx-12356 \
--stagingLocation=gs://xxx-test/template/staging/jsontobq/ \
--tempLocation=gs://xxx-test/temp/ \
--templateLocation=gs://xxx-test/template/templates/jsontobq \
--errorOutput=gs://xxx-test/template/output"
Error:
Caused by: java.lang.IllegalStateException: Cannot estimate size of a FileBasedSource with inaccessible file pattern: {}. [RuntimeValueProvider{propertyName=inputFile, default=null, value=null}]
at org.apache.beam.sdk.repackaged.com.google.common.base.Preconditions.checkState(Preconditions.java:518)
at org.apache.beam.sdk.io.FileBasedSource.getEstimatedSizeBytes(FileBasedSource.java:199)
at org.apache.beam.runners.direct.BoundedReadEvaluatorFactory$InputProvider.getInitialInputs(BoundedReadEvaluatorFactory.java:207)
at org.apache.beam.runners.direct.ReadEvaluatorFactory$InputProvider.getInitialInputs(ReadEvaluatorFactory.java:87)
at org.apache.beam.runners.direct.RootProviderRegistry.getInitialInputs(RootProviderRegistry.java:62)
The Maven build was successful when I passed values for inputFile and tableSpec as below.
mvn -X compile exec:java \
-Dexec.mainClass=com.ihm.adp.pipeline.template.JsonTextToBqTemplate \
-Dexec.args="--project=xxxxxx-123456 \
--stagingLocation=gs://xxx-test/template/staging/jsontobq/ \
--tempLocation=gs://xxx-test/temp/ \
--templateLocation=gs://xxx-test/template/templates/jsontobq \
--inputFile=gs://xxx-test/input/bqtest.json \
--tableSpec=xxx_test.jsontobq_test \
--errorOutput=gs://xxx-test/template/output"
But it won't create any template in Cloud Dataflow.
Is there a way to create a template without validating these runtime arguments during Maven execution?
I think the problem here is that you are not specifying a runner. By default, this is attempting to use the DirectRunner. Try to pass
--runner=TemplatingDataflowPipelineRunner
as part of your -Dexec.args. After this you also should not need to specify the ValueProvider template arguments like inputFile, etc.
More info here:
https://cloud.google.com/dataflow/docs/templates/creating-templates
If you are using Dataflow SDK version 1.x, then you need to specify the following arguments:
--runner=TemplatingDataflowPipelineRunner
--dataflowJobFile=gs://xxx-test/template/templates/jsontobq/
If you are using Dataflow SDK version 2.x (Apache Beam), then you need to specify the following arguments:
--runner=DataflowRunner
--templateLocation=gs://xxx-test/template/templates/jsontobq/
It looks like you're using Dataflow SDK version 2.x and not specifying DataflowRunner for the runner argument.
Reference: https://cloud.google.com/dataflow/docs/templates/creating-templates
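For illustration, here is the original Maven invocation from the question with the runner argument added (the project ID and GCS paths are the placeholders used above; for Dataflow SDK 1.x you would instead pass --runner=TemplatingDataflowPipelineRunner and --dataflowJobFile as described):
mvn compile exec:java \
 -Dexec.mainClass=com.xyz.adp.pipeline.template.JsonTextToBqTemplate \
 -Dexec.args="--project=xxxxxx-12356 \
 --runner=DataflowRunner \
 --stagingLocation=gs://xxx-test/template/staging/jsontobq/ \
 --tempLocation=gs://xxx-test/temp/ \
 --templateLocation=gs://xxx-test/template/templates/jsontobq \
 --errorOutput=gs://xxx-test/template/output"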

Flink Avro Parquet Writer in RollingSink

I have an issue when I'm trying to set an AvroParquetWriter in a RollingSink; the sink path and the writer path seem to be in conflict.
flink version : 1.1.3
parquet-avro version : 1.8.1
error :
[...]
12/14/2016 11:19:34 Source: Custom Source -> Sink: Unnamed(8/8) switched to CANCELED
INFO JobManager - Status of job af0880ede809e0d699eb69eb385ca204 (Flink Streaming Job) changed to FAILED.
java.lang.RuntimeException: Could not forward element to next operator
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:376)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:358)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:346)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:329)
at org.apache.flink.streaming.api.operators.StreamSource$NonTimestampContext.collect(StreamSource.java:161)
at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecord(AbstractFetcher.java:225)
at org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.run(Kafka09Fetcher.java:253)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: File already exists: /home/user/data/file
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:264)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:257)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:386)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:447)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:426)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784)
at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:223)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:266)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:217)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:183)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:153)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:119)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:92)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:66)
at org.apache.parquet.avro.AvroParquetWriter.<init>(AvroParquetWriter.java:54)
at fr.test.SpecificParquetWriter.open(SpecificParquetWriter.java:28) // line in code => writer = new AvroParquetWriter(new Path("/home/user/data/file"), schema, compressionCodecName, blockSize, pageSize);
at org.apache.flink.streaming.connectors.fs.RollingSink.openNewPartFile(RollingSink.java:451)
at org.apache.flink.streaming.connectors.fs.RollingSink.invoke(RollingSink.java:371)
at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:39)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:373)
... 7 more
INFO JobClientActor - 12/14/2016 11:19:34 Job execution switched to status FAILED.
12/14/2016 11:19:34 Job execution switched to status FAILED.
INFO JobClientActor - Terminate JobClientActor.
[...]
main :
RollingSink sink = new RollingSink<String>("/home/user/data");
sink.setBucketer(new DateTimeBucketer("yyyy/MM/dd"));
sink.setWriter(new SpecificParquetWriter());
stream.addSink(sink);
SpecificParquetWriter :
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.flink.streaming.connectors.fs.StreamWriterBase;
import org.apache.flink.streaming.connectors.fs.Writer;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

public class SpecificParquetWriter<V> extends StreamWriterBase<V> {

    private transient AvroParquetWriter writer;

    private CompressionCodecName compressionCodecName = CompressionCodecName.SNAPPY;
    private int blockSize = ParquetWriter.DEFAULT_BLOCK_SIZE;
    private int pageSize = ParquetWriter.DEFAULT_PAGE_SIZE;

    public static final String USER_SCHEMA = "{"
            + "\"type\":\"record\","
            + "\"name\":\"myrecord\","
            + "\"fields\":["
            + " { \"name\":\"str1\", \"type\":\"string\" },"
            + " { \"name\":\"str2\", \"type\":\"string\" },"
            + " { \"name\":\"int1\", \"type\":\"int\" }"
            + "]}";

    public SpecificParquetWriter() {
    }

    @Override
    // workaround
    public void open(FileSystem fs, Path path) throws IOException {
        super.open(fs, path);
        Schema schema = new Schema.Parser().parse(USER_SCHEMA);
        writer = new AvroParquetWriter(new Path("/home/user/data/file"), schema, compressionCodecName, blockSize, pageSize);
    }

    @Override
    public void write(Object element) throws IOException {
        if (writer != null)
            writer.write(element);
    }

    @Override
    public Writer duplicate() {
        return new SpecificParquetWriter();
    }
}
I don't know if I'm doing this the right way...
Is there a simple way to do this?
This is a problem with the base class, which is Writer in the case of RollingSink or StreamWriterBase in the case of BucketingSink: they only accept writers that write to the OutputStream handed to them rather than saving the data on their own, for example:
writer= new AvroKeyValueWriter<K, V>(keySchema, valueSchema, compressionCodec, streamObject);
whereas AvroParquetWriter or ParquetWriter accepts a file path:
writer = AvroParquetWriter.<V>builder(new Path("filePath"))
.withCompressionCodec(CompressionCodecName.SNAPPY)
.withSchema(schema).build();
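To make the contrast concrete, here is a minimal sketch (my illustration, not part of either post) of a writer that does fit the stream-based contract, assuming StreamWriterBase exposes the open part-file stream via getStream() as Flink's own StringWriter does:

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.flink.streaming.connectors.fs.StreamWriterBase;
import org.apache.flink.streaming.connectors.fs.Writer;
import org.apache.hadoop.fs.FSDataOutputStream;

// Writes each record through the stream that the sink opened for the current
// part file, so RollingSink keeps control of file creation, rolling and cleanup.
public class StringLineWriter extends StreamWriterBase<String> {

    @Override
    public void write(String element) throws IOException {
        FSDataOutputStream out = getStream(); // part-file stream managed by the sink
        out.write(element.getBytes(StandardCharsets.UTF_8));
        out.write('\n');
    }

    @Override
    public Writer<String> duplicate() {
        return new StringLineWriter();
    }
}

AvroParquetWriter cannot be plugged in this way because it insists on creating its own file from a Path, which is exactly the conflict shown in the stack trace above.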
I dug into ParquetWriter and realized that what we are trying to do does not make much sense: Flink, being an event processing system like Storm, can't write a single record at a time to a Parquet file, whereas Spark Streaming can because it works on a micro-batch principle.
Using Storm with Trident we can still write Parquet files, but with Flink we can't until Flink introduces something like micro-batches.
So, for this type of use case, Spark Streaming is a better choice.
Or go for batch processing if you want to use Flink.

XML parser behaves differently on Unix machine for huge/big XML file only. Same code works fine on Windows. Why?

Issue: I am facing an issue with XML parsing (SAX parser) on a Unix machine. The same JAR/Java code behaves differently on Windows and Unix. Why? :(
Windows machine: works fine. Using the SAX parser to load the huge XML file, it reads all values correctly and populates them. Charset.defaultCharset() is windows-1252.
Unix machine: I then created the JAR, deployed it to Unix (Tomcat) and executed it.
I tried to load the same huge XML file but noticed that some values or characters are populated empty or incomplete; for example,
the country name is populated as "ysia" instead of "Malaysia", or the transaction date as "3 PM" instead of "18/09/2016 03:31:23 PM". Charset.defaultCharset() is UTF-8.
The issue occurs only on Unix, because when I load the same XML on Windows or in my local Eclipse it works fine and all values populate correctly.
I also tried to modify my code and set the encoding to UTF-8 for the InputStreamReader, but it still does not read the values correctly on the Unix box.
Note: there are no special characters in the XML. I also noticed that when I take the same records (those whose values are not populated correctly) out into another XML file and load it on the Unix machine with the same JAR, it works fine. So the issue only occurs when these records are loaded as part of the huge file. :(
Setup Code:
SAXParserFactory saxParserFactory = SAXParserFactory.newInstance();
try {
    SAXParser saxParser = saxParserFactory.newSAXParser();
    InputStream inputStream = new FileInputStream(inputFilePath);
    Reader reader = new InputStreamReader(inputStream, "UTF-8");
    InputSource is = new InputSource(reader);
    is.setEncoding("UTF-8");
    saxParser.parse(is, (DefaultHandler) handler);
} catch (Exception ex) {
    ex.printStackTrace();
}
Handlers:
public void characters(char[] ac, int i, int j) throws SAXException {
    chars.append(ac, i, j);
    tmpValue = new String(ac, i, j).trim();
}

public void endElement(String s, String s1, String element) throws SAXException {
    if (element.equalsIgnoreCase("transactionDate")) {
        obj.setTransactionDate(tmpValue);
    }
}
Please suggest , What should be the solution ?
If the current read buffer ends in the middle of an element, you may get two (or more) calls to characters() for the same element -- for instance one with "Mala" and one with "ysia" -- instead of just one call with "Malaysia". In this case, your code overwrites tmpValue containing "Mala" with "ysia". To address this, you need to accumulate the content of multiple calls to characters():
public void startElement(String uri, String localName, String qName,
        Attributes attributes) throws SAXException {
    if (qName.equalsIgnoreCase("customerName")) {
        chars.setLength(0);
    }
    // Reset the accumulator at the start of each element.
    tmpValue = null;
}

public void characters(char[] ac, int i, int j) throws SAXException {
    chars.append(ac, i, j);
    // characters() may be called several times for one element,
    // so append instead of overwriting.
    if (tmpValue == null) {
        tmpValue = new String(ac, i, j);
    } else {
        tmpValue += new String(ac, i, j);
    }
}

public void endElement(String s, String s1, String element) throws SAXException {
    if (element.equalsIgnoreCase("transactionDate") && tmpValue != null) {
        obj.setTransactionDate(tmpValue.trim());
    }
}

Code substitution for DSL using ANTLR

The DSL I'm working on allows users to define a 'complete text substitution' variable. When parsing the code, we then need to look up the value of the variable and start parsing again from that code.
The substitution can be very simple (single constants) or entire statements or code blocks.
This is a mock grammar which I hope illustrates my point.
grammar a;
entry
: (set_variable
| print_line)*
;
set_variable
: 'SET' ID '=' STRING_CONSTANT ';'
;
print_line
: 'PRINT' ID ';'
;
STRING_CONSTANT: '\'' ('\'\'' | ~('\''))* '\'' ;
ID: [a-z][a-zA-Z0-9_]* ;
VARIABLE: '&' ID;
BLANK: [ \t\n\r]+ -> channel(HIDDEN) ;
Then the following statements, executed consecutively, should be valid:
SET foo = 'Hello world!';
PRINT foo;
SET bar = 'foo;'
PRINT &bar // should be interpreted as 'PRINT foo;'
SET baz = 'PRINT foo; PRINT'; // one complete statement and one incomplete statement
&baz foo; // should be interpreted as 'PRINT foo; PRINT foo;'
Any time the & variable token is discovered, we immediately switch to interpreting the value of that variable instead. As above, this can mean that you set up the code in such a way that it is invalid, full of half-statements that are only completed when the value is just right. The variables can be redefined at any point in the text.
Strictly speaking the current language definition doesn't disallow nesting &vars inside each other, but the current parsing doesn't handle this and I would not be upset if it wasn't allowed.
Currently I'm building an interpreter using a visitor, but this one I'm stuck on.
How can I build a lexer/parser/interpreter which will allow me to do this? Thanks for any help!
So I have found one solution to the issue. I think it could be better - as it potentially does a lot of array copying - but at least it works for now.
EDIT: I was wrong before, and my solution would consume ANY & that it found, including those in valid locations such as inside string constants. This seems like a better solution:
First, I extended the InputStream so that it is able to rewrite the input stream when a & is encountered. This unfortunately involves copying the array, which I can maybe resolve in the future:
MacroInputStream.java
package preprocessor;

import java.util.HashMap;

import org.antlr.v4.runtime.ANTLRInputStream;

public class MacroInputStream extends ANTLRInputStream {

    private HashMap<String, String> map;

    public MacroInputStream(String s, HashMap<String, String> map) {
        super(s);
        this.map = map;
    }

    public void rewrite(int startIndex, int stopIndex, String replaceText) {
        int length = stopIndex - startIndex + 1;
        char[] replData = replaceText.toCharArray();
        if (replData.length == length) {
            for (int i = 0; i < length; i++) data[startIndex + i] = replData[i];
        } else {
            char[] newData = new char[data.length + replData.length - length];
            System.arraycopy(data, 0, newData, 0, startIndex);
            System.arraycopy(replData, 0, newData, startIndex, replData.length);
            System.arraycopy(data, stopIndex + 1, newData, startIndex + replData.length, data.length - (stopIndex + 1));
            data = newData;
            n = data.length;
        }
    }
}
Secondly, I extended the Lexer so that when a VARIABLE token is encountered, the rewrite method above is called:
MacroGrammarLexer.java
package language;

import java.util.HashMap;

import org.antlr.v4.runtime.Token;

import preprocessor.MacroInputStream;

public class MacroGrammarLexer extends DSL_GrammarLexer {

    private HashMap<String, String> map;

    public MacroGrammarLexer(MacroInputStream input, HashMap<String, String> map) {
        super(input);
        this.map = map;
    }

    private MacroInputStream getInput() {
        return (MacroInputStream) _input;
    }

    @Override
    public Token nextToken() {
        Token t = super.nextToken();
        if (t.getType() == VARIABLE) {
            System.out.println("Encountered token " + t.getText() + " ===> rewriting!!!");
            getInput().rewrite(t.getStartIndex(), t.getStopIndex(),
                    map.get(t.getText().substring(1)));
            getInput().seek(t.getStartIndex()); // reset input stream to previous position
            return super.nextToken();
        }
        return t;
    }
}
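For context, a rough sketch of how the pieces could be wired together (MacroParseDriver is a hypothetical name; the DSL_GrammarParser constructor call, the map field and the entry start rule are taken from the snippets and the mock grammar above, so treat this as a sketch rather than working code from the project):

import java.util.HashMap;

import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;

public class MacroParseDriver {

    // Build the rewriting input stream, the lexer and the parser, all sharing
    // the same variable map, then run the start rule.
    public static ParseTree parseWithMacros(String sourceText) {
        HashMap<String, String> macros = new HashMap<>();
        MacroInputStream input = new MacroInputStream(sourceText, macros);
        MacroGrammarLexer lexer = new MacroGrammarLexer(input, macros);
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        DSL_GrammarParser parser = new DSL_GrammarParser(tokens);
        parser.map = macros;   // the parser consults the same map (see the next step)
        return parser.entry(); // 'entry' is the start rule of the mock grammar
    }
}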
Lastly, I modified the generated parser to set the variables at the time of parsing:
DSL_GrammarParser.java
...
...
HashMap<String, String> map; // same map as before, passed as a new argument.
...
...
public final SetContext set() throws RecognitionException {
    SetContext _localctx = new SetContext(_ctx, getState());
    enterRule(_localctx, 130, RULE_set);
    try {
        enterOuterAlt(_localctx, 1);
        {
            String vname = null; String vval = null; // set up variables
            setState(1215); match(SET);
            setState(1216); vname = variable_name().getText(); // set vname
            setState(1217); match(EQUALS);
            setState(1218); vval = string_constant().getText(); // set vval
            System.out.println("Found SET " + vname + " = " + vval + ";");
            map.put(vname, vval);
        }
    }
    catch (RecognitionException re) {
        _localctx.exception = re;
        _errHandler.reportError(this, re);
        _errHandler.recover(this, re);
    }
    finally {
        exitRule();
    }
    return _localctx;
}
...
...
Unfortunately this method is final so this will make maintenance a bit more difficult, but it works for now.
The standard pattern for handling your requirements is to implement a symbol table. The simplest form is a key:value store. In your visitor, add variable declarations as they are encountered, and read out the values as variable references are encountered.
As described, your DSL does not define a scoping requirement on the variables declared. If you do require scoped variables, then use a stack of key:value stores, pushing and popping on scope entry and exit.
See this related StackOverflow answer.
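To illustrate, here is a minimal sketch of such a symbol table with optional scoping (a hypothetical class, not tied to any particular generated visitor):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Minimal scoped symbol table: a stack of key:value stores. With a single
// scope it degenerates to the plain map described above.
public class SymbolTable {

    private final Deque<Map<String, String>> scopes = new ArrayDeque<>();

    public SymbolTable() {
        pushScope(); // global scope
    }

    public void pushScope() {
        scopes.push(new HashMap<>());
    }

    public void popScope() {
        scopes.pop();
    }

    // Called when the visitor sees a SET statement.
    public void define(String name, String value) {
        scopes.peek().put(name, value);
    }

    // Called when the visitor sees a &variable reference; innermost scope wins.
    public String resolve(String name) {
        for (Map<String, String> scope : scopes) {
            if (scope.containsKey(name)) {
                return scope.get(name);
            }
        }
        return null; // or report an undefined-variable error
    }
}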
Separately, since your strings may contain commands, you can simply parse the contents as part of your initial parse. That is, expand your grammar with a rule that includes the full set of valid contents:
set_variable
: 'SET' ID '=' stringLiteral ';'
;
stringLiteral:
Quote Quote? (
( set_variable
| print_line
| VARIABLE
| ID
)
| STRING_CONSTANT // redefine without the quotes
)
Quote
;

Database not persisted between builds with xamarin ios

I'm creating a simple SQLite-driven app for iOS using Xamarin Studio on a Mac.
The SQLite file is created in the "personal" folder and is persisted between builds, but when I run the app the tables I created in the previous debug session are gone.
In my code, after checking that the file exists, I connect using a SqliteConnection, create a table and insert a row with the ExecuteNonQuery method of the command object. While in the same context I can query the table using a second command object, but if I stop the debugger and restart, the table is gone.
Should I put the file in a different folder? Is there a setting in Xamarin or iOS to keep the tables? Am I unintentionally using temp tables in SQLite, or what could be the problem?
Note: so far I'm only using the Starter edition of Xamarin and debugging on the iPhone simulator.
public class BaseHandler
{
private static bool DbIsUpToDate { get; set; }
const int DB_VERSION = 1; //Created DB
const string DB_NAME = "mydb.db3";
protected const string CNN_STRING = "Data Source=" + DB_NAME + ";Version=3";
public BaseHandler ()
{
//No need to validate database more than once on each restart.
if (DbIsUpToDate)
return;
CheckAndCreateDatabase(DB_NAME);
int userVersion = GetUserVersion();
UpdateDBToVersion(userVersion);
DbIsUpToDate = true;
}
int GetUserVersion()
{
int version = 0;
using (var cnn = new SqliteConnection(CNN_STRING))
{
cnn.Open();
using (var cmd = cnn.CreateCommand())
{
cmd.CommandText = "CREATE TABLE UVERSION (VERSION INTEGER);" +
"INSERT INTO UVERSION (VERSION) VALUES(1);";
cmd.ExecuteNonQuery();
}
using (var cmd = cnn.CreateCommand())
{
cmd.CommandText = "SELECT VERSION FROM UVERSION;";
var pragma = cmd.ExecuteScalar();
version = Convert.ToInt32((long)pragma);
}
}
return version;
}
void UpdateDBToVersion(int userVersion)
{
//Prepare the SQL statements depending on the user's current version
var sqls = new List<string> ();
if (userVersion < 1)
{
sqls.Add("CREATE TABLE IF NOT EXISTS MYTABLE ("
+ " ID INTEGER PRIMARY KEY, "
+ " NAME TEXT, "
+ " DESC TEXT "
+ ");");
}
//Execute the update statements
using (var cnn = new SqliteConnection(CNN_STRING))
{
cnn.Open();
using (var trans = cnn.BeginTransaction(System.Data.IsolationLevel.ReadCommitted))
{
foreach(string sql in sqls)
{
using (var cmd = cnn.CreateCommand())
{
cmd.CommandText = sql;
cmd.ExecuteNonQuery();
}
}
trans.Commit();
//SetUserVersion(DB_VERSION);
}
}
}
protected string GetDBPath (string dbName)
{
// get a reference to the documents folder
var documents = Environment.GetFolderPath(Environment.SpecialFolder.Personal);
// create the db path
string db = Path.Combine (documents, dbName);
return db;
}
protected void CheckAndCreateDatabase (string dbName)
{
var dbPath = GetDBPath(dbName);
// determine whether or not the database exists
bool dbExists = File.Exists(dbPath);
if (!dbExists)
SqliteConnection.CreateFile(dbPath);
}
}
Again, my problem is that every time I run the debugger it runs GetUserVersion but the table UVERSION is not persisted between sessions. The "File.Exists(dbPath)" returns true so CreateFile is not run. Why is the db empty?
This is a code snippet I've used to save my databases in the iOS simulator and the data seems to persist between app compiles just fine:
string documentsPath = Environment.GetFolderPath (Environment.SpecialFolder.Personal);
string libraryPath = Path.Combine (documentsPath, "../Library/");
var path = Path.Combine (libraryPath, "MyDatabase.db3");
You may also want to check out the SQLite class for Xamarin off of Github:
https://github.com/praeclarum/sqlite-net/tree/master/src
Here's a tutorial on how to use said class:
http://docs.xamarin.com/recipes/ios/data/sqlite/create_a_database_with_sqlitenet
Turns out that I was creating the connection object using CNN_STRING, which just had the db name instead of the full path to the db. Apparently the connection object creates the database if the file doesn't exist, so the File.Exists(...) check might not be needed. I'm not really sure whether it should create a temporary db when the complete path is not supplied, but that seems to be the case. Changing the connection string to use the full path (i.e. "Data Source=" + GetDBPath(DB_NAME)) solved the problem.