What are the step by step actions needed to upgrade a holochain rust back end from 0.0.1 to 0.0.2? - holochain

I started by reviewing the API notes for the two versions and comparing them:
https://developer.holochain.org/api/
What I have done so far:
Preparation:
Downloaded and installed 0.0.2, and then updated the bash_profile following this link:
https://developer.holochain.org/start.html
JSON PARSE/Stringify update
Updated all of the tests to remove any JSON.parse and JSON.stringify calls as they are no longer needed, for example replacing this:
JSON.stringify({})
with this:
{}
Derive function update
Updated all derive attributes in the zome definition files (lib.rs) to include Debug and DefaultJson, like this:
#[derive(Serialize, Deserialize, Debug, DefaultJson)]
JsonString update
Did a global find and replace across all zome files to change the return types from serde_json to JsonString, replacing
-> serde_json::Value
with
-> JsonString
so a handler signature now looks like this:
fn handle_create_action(action: Action, user_address: HashString) -> JsonString { ...
Current errors
I am running into these errors:
error: cannot find derive macro DefaultJson in this scope
error[E0412]: cannot find type JsonString in this scope
How can we import these into the lib.rs files?
Update
This is by no means a comprehensive answer, but here are some of the additional steps I have found with help.
You will also need to edit the Cargo.toml file of each zome so that the dependencies section looks like this:
serde = "1.0"
serde_json = "1.0"
serde_derive = "1.0"
hdk = { git = "https://github.com/holochain/holochain-rust", branch = "master" }
holochain_core_types = { git = "https://github.com/holochain/holochain-rust", branch = "master" }
holochain_core_types_derive = { git = "https://github.com/holochain/holochain-rust", branch = "master" }
holochain_wasm_utils = { git = "https://github.com/holochain/holochain-rust", branch = "master" }
This was found in the specification app, which is already up to date with the release that happened last night, at this page:
https://github.com/holochain/dev-camp-tests-rust/blob/master/zomes/people/code/Cargo.toml
Each zome needed this as a replacement for everything above the #[derive] attribute:
#![feature(try_from)]
#[macro_use]
extern crate hdk;
extern crate serde;
#[macro_use]
extern crate serde_derive;
#[macro_use]
extern crate serde_json;
extern crate holochain_core_types;
#[macro_use]
extern crate holochain_core_types_derive;
use hdk::{
    holochain_core_types::{
        dna::zome::entry_types::Sharing,
        hash::HashString,
        json::JsonString,
        entry::Entry,
        entry::entry_type::EntryType,
        error::HolochainError,
        cas::content::Address,
    },
};
This resolved the initial compile errors and revealed the next layer of changes needed when I ran hc test to compile, build, and test the app. This is what I am seeing now:
Error 1
error[E0061]: this function takes 1 parameter but 2 parameters were supplied
--> src/lib.rs:56:11
|
56 | match hdk::commit_entry("metric", json!(metric)) {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected 1 parameter
Error 2
error[E0308]: mismatched types
--> src/lib.rs:60:24
|
60 | return json!({"link error": link_result.err().unwrap()});
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `holochain_core_types::json::JsonString`, found enum `serde_json::Value`
I will attempt to resolve this one by replacing the serde_json calls in the zome code with JsonString...
Error 3
error[E0609]: no field `links` on type `hdk::holochain_wasm_utils::api_serialization::get_links::GetLinksResult`
--> src/lib.rs:82:18
|
82 | .links
| ^^^^^ unknown field
Error 4
error[E0599]: no method named `to_json` found for type `hdk::error::ZomeApiError` in the current scope
--> src/lib.rs:97:32
|
97 | "error": hdk_error.to_json()
| ^^^^^^^
Update 2
@connorturlands' answer got me through most of those errors, and now there appears to be just one more.
error[E0063]: missing fields `active`, `date_time`, `description` and 12 other fields in initializer of `Metric`
--> src/lib.rs:48:68
|
48 | let metric_entry = Entry::new(EntryType::App("metric".into()), Metric{
| ^^^^^^ missing `active`, `date_time`, `description` and 12 other fields
error: aborting due to previous error
For more information about this error, try `rustc --explain E0063`.
error: Could not compile `metrics`.
Which is in response to this zome definition:
fn handle_create_metric(metric: Metric, user_address: HashString) -> JsonString {
    let metric_entry = Entry::new(EntryType::App("metric".into()), Metric{
        // => Here is where the error triggers... it wants me to define 'title', 'time', etc.,
        // but those will be inputted, so as a core function I don't see the point. Not sure how to address this.
    });
    match hdk::commit_entry(&metric_entry) {
        Ok(metric_address) => {
            match hdk::link_entries(
                &user_address,
                &metric_address,
                "metric_by_user"
            ) {
                Ok(link_address) => metric_address.into(),
                Err(e) => e.into(),
            }
        }
        Err(hdk_error) => hdk_error.into(),
    }
}

For error 1, just check this example, and copy it:
https://developer.holochain.org/api/0.0.2/hdk/api/fn.commit_entry.html
For error 2, just do
link_result.into()
which converts it into a JsonString
For error 3, use
.addresses()
instead of .links, this can be seen here: https://developer.holochain.org/api/0.0.2/holochain_wasm_utils/api_serialization/get_links/struct.GetLinksResult.html
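A minimal sketch of that change (hedged; assuming get_links_result is the GetLinksResult value you already have from hdk::get_links):
// 0.0.1-style: get_links_result.links
// 0.0.2-style:
let addresses = get_links_result.addresses();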
And for error 4 just do
hdk_error.into()
and remove it from the json! wrapping that it looks like you're attempting :)
In general, if you see a reference to something relating to the hdk, use the search feature of the API ref to find out more about it; it's very good.

Migrating from 0.0.1 to 0.0.2 was exactly what I did recently for the todo-list example. I just created a branch for the old version so you can compare the two:
https://github.com/willemolding/holochain-rust-todo
https://github.com/willemolding/holochain-rust-todo/tree/working-on-v0.0.1
From memory some of the gotchas are:
commit_entry now takes a single reference to an Entry object
Links must be included as part of the define_zome! or they cannot be created
move from serde_json Value to JsonString
Need to include holochain_core_types_derive = { git = "https://github.com/holochain/holochain-rust" , tag = "holochain-cmd-v0.0.2" } in the cargo.toml
Responses from get_entry are generic entry types and can be converted into a local type, say ListEntry, as ListEntry::try_from(entry.value()) (see the sketch after this list)
There are a whole lot more, so it's probably best to check out the repo.
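For the get_entry conversion mentioned above, a minimal sketch (assuming a local ListEntry struct deriving Serialize, Deserialize, Debug and DefaultJson, and an entry already fetched via hdk::get_entry) would look roughly like this:
use std::convert::TryFrom; // needs #![feature(try_from)], which is already at the top of lib.rs

// convert the generic entry content back into the zome's own type
let list_entry = ListEntry::try_from(entry.value()); // a Result you can then match on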

Related

How is vendorSha256 computed?

I'm trying to understand how the vendorSha256 is calculated when using buildGoModule. In the nixpkgs manual the only info I get is:
"vendorSha256: is the hash of the output of the intermediate fetcher derivation."
Is there a way I can calculate the vendorSha256 for a Nix expression I'm writing? To take a specific example, how was the "sha256-Y4WM+o+5jiwj8/99UyNHLpBNbtJkKteIGW2P1Jd9L6M=" generated here:
{ lib, buildGoModule, fetchFromGitHub }:

buildGoModule rec {
  pname = "oapi-codegen";
  version = "1.6.0";

  src = fetchFromGitHub {
    owner = "deepmap";
    repo = pname;
    rev = "v${version}";
    sha256 = "sha256-doJ1ceuJ/gL9vlGgV/hKIJeAErAseH0dtHKJX2z7pV0=";
  };

  vendorSha256 = "sha256-Y4WM+o+5jiwj8/99UyNHLpBNbtJkKteIGW2P1Jd9L6M=";

  # Tests use network
  doCheck = false;

  meta = with lib; {
    description = "Go client and server OpenAPI 3 generator";
    homepage = "https://github.com/deepmap/oapi-codegen";
    license = licenses.asl20;
    maintainers = [ maintainers.j4m3s ];
  };
}
From the manual:
The function buildGoModule builds Go programs managed with Go modules.
It builds a Go Modules through a two phase build:
An intermediate fetcher derivation. This derivation will be used to fetch all of the dependencies of the Go module.
A final derivation will use the output of the intermediate derivation to build the binaries and produce the final output.
You can see that when you try to build the above expression, in the output of nix-build. If you run:
nix-build -E 'with import <nixpkgs> { }; callPackage ./yourExpression.nix { }'
you see the first 2 lines of output:
these 2 derivations will be built:
/nix/store/j13s3dvlwz5w9xl5wbhkcs7lrkgksv3l-oapi-codegen-1.6.0-go-modules.drv
/nix/store/4wyj1d9f2m0521nlkjgr6al0wfz12yjn-oapi-codegen-1.6.0.drv
The first derivation will be used to fetch all dependencies for your Go module, and the second will be used to build your actual module. So vendorSha256 is the hash of the output of that first derivation.
When you write a Nix expression to build a Go module, you don't know that hash in advance. You only know it once the first derivation has been realised (the dependencies downloaded and the hash computed from them). However, you can use Nix's hash validation to find out the value of vendorSha256.
Modify your Nix expression like so:
{ lib, buildGoModule, fetchFromGitHub }:

buildGoModule rec {
  pname = "oapi-codegen";
  version = "1.6.0";

  src = fetchFromGitHub {
    owner = "deepmap";
    repo = pname;
    rev = "v${version}";
    sha256 = "sha256-doJ1ceuJ/gL9vlGgV/hKIJeAErAseH0dtHKJX2z7pV0=";
  };

  vendorSha256 = lib.fakeSha256;

  # Tests use network
  doCheck = false;

  meta = with lib; {
    description = "Go client and server OpenAPI 3 generator";
    homepage = "https://github.com/deepmap/oapi-codegen";
    license = licenses.asl20;
    maintainers = [ maintainers.j4m3s ];
  };
}
The only difference is that vendorSha256 now has the value of lib.fakeSha256, which is just a fake/wrong sha256 hash. Nix will try to build the first derivation and will check the hash of the dependencies against this value. Since they will not match, an error will occur:
error: hash mismatch in fixed-output derivation '/nix/store/j13s3dvlwz5w9xl5wbhkcs7lrkgksv3l-oapi-codegen-1.6.0-go-modules.drv':
specified: sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
got: sha256-Y4WM+o+5jiwj8/99UyNHLpBNbtJkKteIGW2P1Jd9L6M=
error: 1 dependencies of derivation '/nix/store/4wyj1d9f2m0521nlkjgr6al0wfz12yjn-oapi-codegen-1.6.0.drv' failed to build
So this answers your question. The value of vendorSha256 you need is sha256-Y4WM+o+5jiwj8/99UyNHLpBNbtJkKteIGW2P1Jd9L6M=. Copy it into your file and you're good to go!

F# interactive script load management

Suppose I have the following scripts:
C.fsx
namespace FruitSalad

type Cherry =
    {
        Cherry : int
    }
B.fsx
#load "./C.fsx"

namespace FruitSalad

type Banana =
    {
        Banana : int
        Cherry : Cherry
    }
A.fsx
#load "B.fsx"
#load "C.fsx"

open FruitSalad

type Apple =
    {
        Apple : int
        Banana : Banana
        Cherry : Cherry
    }

let c =
    {
        Cherry = 3
    }

let a =
    {
        Apple = 1
        Banana =
            {
                Banana = 2
                Cherry = c
            }
        Cherry = c
    }

printfn "%A" a
printfn "%A" a
Running A.fsx gives this error:
error FS0001: This expression was expected to have type
'FSI_0001.FruitSalad.Cherry'
but here has type
'FSI_0002.FruitSalad.Cherry'
I can fix this by removing #load "C.fsx" from A.fsx.
However, maintaining this is awkward for much larger script projects.
Does F# support something like "include once" so that this is not necessary?
I agree with the comment by Fyodor that having a project might be easier. Note that you can create a project with a number of *.fs files in it, but still also have a *.fsx file that loads all the files from the project. This way, the IDE will read information from the project (and all your autocomplete will work), but you will also be able to run everything from a script file, which just needs to #load all the *.fs files in the correct order.
Another option that might help you if you prefer to stick to scripts is to use #load with multiple files at the same time. In the case above, you should be able to do:
#load "./B.fsx" "./C.fsx"
This way, F# interactive will know that you are loading these two files at the same time and will not create a separate version of C.fsx while loading B.fsx.
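Applied to the scripts above, A.fsx would then start like this (the rest of the script is unchanged):
#load "./B.fsx" "./C.fsx"

open FruitSalad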

Conditionally create a Bazel rule based on --config

I'm working on a problem in which I only want to create a particular rule if a certain Bazel config has been specified (via '--config'). We have been using Bazel since 0.11 and have a bunch of build infrastructure that works around former limitations in Bazel. I am incrementally porting us up to newer versions. One of the features that was missing was compiler transitions, and so we rolled our own using configs and some external scripts.
My first attempt at solving my problem looks like this:
load("#rules_cc//cc:defs.bzl", "cc_library")
# use this with a select to pick targets to include/exclude based on config
# see __build_if_role for an example
def noop_impl(ctx):
pass
noop = rule(
implementation = noop_impl,
attrs = {
"deps": attr.label_list(),
},
)
def __sanitize(config):
if len(config) > 2 and config[:2] == "//":
config = config[2:]
return config.replace(":", "_").replace("/", "_")
def build_if_config(**kwargs):
config = kwargs['config']
kwargs.pop('config')
name = kwargs['name'] + '_' + __sanitize(config)
binary_target_name = kwargs['name']
kwargs['name'] = binary_target_name
cc_library(**kwargs)
noop(
name = name,
deps = select({
config: [ binary_target_name ],
"//conditions:default": [],
})
)
This almost gets me there, but the problem is that if I want to build a library as an output, then it becomes an intermediate dependency, and therefore gets deleted or never built.
For example, if I do this:
build_if_config(
    name = "some_lib",
    srcs = [ "foo.c" ],
    config = "//:my_config",
)
and then I run
bazel build --config my_config //:some_lib
Then libsome_lib.a does not make it to bazel-out, although if I define it using cc_library, then it does.
Is there a way that I can just create the appropriate rule directly in the macro instead of creating a noop rule and using a select? Or another mechanism?
Thanks in advance for your help!
As I noted in my comment, I was misunderstanding how Bazel figures out its dependencies. The "create a file" section of the Rules Tutorial explains some of the details, and I followed along there for some of my solution.
Basically, the problem was not that the built files were not sticking around; it was that they were never getting built. Bazel did not know to look in the deps variable and build those things: it seems I had to create an action which uses the deps, and then register that action by returning a (list of) DefaultInfo.
Below is my new noop_impl function:
def noop_impl(ctx):
    if len(ctx.attr.deps) == 0:
        return None

    # ctx.attr has the attributes of this rule
    dep = ctx.attr.deps[0]

    # DefaultInfo is apparently some sort of globally available
    # class that can be used to index Target objects
    infile = dep[DefaultInfo].files.to_list()[0]

    outfile = ctx.actions.declare_file('lib' + ctx.label.name + '.a')
    ctx.actions.run_shell(
        inputs = [infile],
        outputs = [outfile],
        command = "cp %s %s" % (infile.path, outfile.path),
    )

    # we can also instantiate a DefaultInfo to indicate what output
    # we provide
    return [DefaultInfo(files = depset([outfile]))]

How can I build custom rules using the output of workspace_status_command?

The bazel build flag --workspace_status_command supports calling a script to retrieve e.g. repository metadata; this is also known as build stamping and is available in rules like java_binary.
I'd like to create a custom rule using this metadata.
I want to use this for a common support function. It should receive the git version and some other attributes and create a version.go output file usable as a dependency.
So I started a journey looking at rules in various bazel repositories.
Rules like rules_docker support stamping with stamp in container_image and let you reference the status output in attributes.
rules_go supports it in the x_defs attribute of go_binary.
This would be ideal for my purpose and I dug in...
It looks like I can get what I want with ctx.actions.expand_template, using the entries in ctx.info_file or ctx.version_file as a dictionary for substitutions. But I didn't figure out how to get a dictionary out of those files. And those two files seem to be "unofficial"; they are not part of the ctx documentation.
Building on what I found out already: How do I get a dict based on the status command output?
If that's not possible, what is the shortest/simplest way to access workspace_status_command output from custom rules?
I've been exactly where you are and I ended up following the path you've started exploring. I generate a JSON description that also includes information collected from git to package with the result, and I ended up doing something like this:
def _build_mft_impl(ctx):
    args = ctx.actions.args()
    args.add('-f')
    args.add(ctx.info_file)
    args.add('-i')
    args.add(ctx.files.src)
    args.add('-o')
    args.add(ctx.outputs.out)
    ctx.actions.run(
        outputs = [ctx.outputs.out],
        inputs = ctx.files.src + [ctx.info_file],
        arguments = [args],
        progress_message = "Generating manifest: " + ctx.label.name,
        executable = ctx.executable._expand_template,
    )

def _get_mft_outputs(src):
    return {"out": src.name[:-len(".tmpl")]}

build_manifest = rule(
    implementation = _build_mft_impl,
    attrs = {
        "src": attr.label(mandatory=True,
                          allow_single_file=[".json.tmpl", ".json_tmpl"]),
        "_expand_template": attr.label(default=Label("//:expand_template"),
                                       executable=True,
                                       cfg="host"),
    },
    outputs = _get_mft_outputs,
)
//:expand_template is a label in my case pointing to a py_binary performing the transformation itself. I'd be happy to learn about a better (more native, fewer hops) way of doing this, but (for now) I went with it: it works. A few comments on the approach and your concerns:
AFAIK you cannot read the file in and perform operations on it in Skylark itself...
...speaking of which, it's probably not a bad thing to keep the transformation (tool) and build description (bazel) separate anyways.
It could be debated what constitutes the official documentation, but while ctx.info_file may not appear in the reference manual, it is documented in the source tree. :) Which is the case for other areas as well (and I hope that is not because those interfaces are considered not committed to yet).
For the sake of completeness, in src/main/java/com/google/devtools/build/lib/skylarkbuildapi/SkylarkRuleContextApi.java there is:
@SkylarkCallable(
    name = "info_file",
    structField = true,
    documented = false,
    doc =
        "Returns the file that is used to hold the non-volatile workspace status for the "
            + "current build request."
)
public FileApi getStableWorkspaceStatus() throws InterruptedException, EvalException;
EDIT: a few extra details, as asked in the comment.
In my workspace_status.sh I would have for instance the following line:
echo STABLE_GIT_REF $(git log -1 --pretty=format:%H)
In my .json.tmpl file I would then have:
"ref": "${STABLE_GIT_REF}",
I've opted for shell like notation of text to be replaced, since it's intuitive for many users as well as easy to match.
As for the replacement, the relevant portion of the actual code (CLI handling left out) would be:
import re

def get_map(val_file):
    """
    Return dictionary of key/value pairs from ``val_file``.
    """
    value_map = {}
    for line in val_file:
        (key, value) = line.split(' ', 1)
        value_map.update(((key, value.rstrip('\n')),))
    return value_map

def expand_template(val_file, in_file, out_file):
    """
    Read each line from ``in_file`` and write it to ``out_file`` replacing all
    ${KEY} references with values from ``val_file``.
    """
    def _substitute_variable(mobj):
        return value_map[mobj.group('var')]

    re_pat = re.compile(r'\${(?P<var>[^} ]+)}')
    value_map = get_map(val_file)
    for line in in_file:
        out_file.write(re_pat.subn(_substitute_variable, line)[0])
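For illustration, the two functions above could be wired together roughly like this (a hedged sketch; the real CLI handling is omitted and the file names here are made up):
# hypothetical file names; in practice these come from the CLI arguments (-f, -i, -o)
with open("stable-status.txt") as val_file, \
     open("manifest.json.tmpl") as in_file, \
     open("manifest.json", "w") as out_file:
    expand_template(val_file, in_file, out_file)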
EDIT2: This is how I expose the Python script to the rest of Bazel:
py_binary(
    name = "expand_template",
    main = "expand_template.py",
    srcs = ["expand_template.py"],
    visibility = ["//visibility:public"],
)
Building on Ondrej's answer, I now use something like this (adapted in the SO editor, so it might contain small errors):
tools/bazel.rc:
build --workspace_status_command=tools/workspace_status.sh
tools/workspace_status.sh:
echo STABLE_GIT_REF $(git rev-parse HEAD)
version.bzl:
_VERSION_TEMPLATE_SH = """
set -e -u -o pipefail
while read line; do
export "${line% *}"="${line#* }"
done <"$INFILE" \
&& cat <<EOF >"$OUTFILE"
{ "ref": "${STABLE_GIT_REF}"
, "service": "${SERVICE_NAME}"
}
EOF
"""
def _commit_info_impl(ctx):
    ctx.actions.run_shell(
        outputs = [ctx.outputs.outfile],
        inputs = [ctx.info_file],
        progress_message = "Generating version file: " + ctx.label.name,
        command = _VERSION_TEMPLATE_SH,
        env = {
            'INFILE': ctx.info_file.path,
            'OUTFILE': ctx.outputs.outfile.path,
            'SERVICE_NAME': ctx.attr.service,
        },
    )
commit_info = rule(
    implementation = _commit_info_impl,
    attrs = {
        'service': attr.string(
            mandatory = True,
            doc = 'name of versioned service',
        ),
    },
    outputs = {
        'outfile': 'manifest.json',
    },
)
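A BUILD file can then use the rule like this (hedged sketch; the load path and service name are just examples):
load("//:version.bzl", "commit_info")

commit_info(
    name = "commit_info",
    service = "my-service",  # ends up as "service" in the generated manifest.json
)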

grails changelog preconditions not doing anything

I am trying to make changes to the database using the changelog. Since I cannot guarantee that a row currently exists for the specific code (but it could), I need to be able to check for it in order to do either an insert or an update.
Here is what I have been testing, which doesn't appear to do anything. Any words of advice are welcome.
databaseChangeLog = {
    changeSet(author:'kmert', id:'tubecap-insert-update-1') {
        preConditions(onFail="WARN", onFailMessage:"Tube cap does not exist,skipping because it cannot be updated.") {
            sqlCheck(expectedResult='1', 'SELECT * FROM [ltc2_tube_cap] WHERE code=11')
        }
        grailsChange {
            change {
                sql.execute("""
                    UPDATE [ltc2_tube_cap]
                    SET [name] = 'White'
                    WHERE [code] = 11;
                """)
            }
            rollback {
            }
        }
    }
}
UPDATE: I got the changelog script running, but I am now getting this error. I found the code from an online source. I cannot find a lot of documentation on preconditions...
| Starting dbm-update for database hapi_app_user # jdbc:jtds:sqlserver://localhost;databaseName=LabTraffic;MVCC=TRUE;LOCK_TIMEOUT=10000
problem parsing TubeCapUpdate.groovy: No signature of method: grails.plugin.databasemigration.DslBuilder.sqlCheck() is applicable for argument types: (java.lang.String, java.lang.String) values: [1, SELECT * FROM ltc2_tube_cap WHERE code=11] (re-run with --verbose to see the stacktrace)
problem parsing changelog.groovy: No signature of method: grails.plugin.databasemigration.DslBuilder.sqlCheck() is applicable for argument types: (java.lang.String, java.lang.String) values: [1, SELECT * FROM ltc2_tube_cap WHERE code=11] (re-run with --verbose to see the stacktrace)
groovy.lang.MissingMethodException: No signature of method: grails.plugin.databasemigration.DslBuilder.sqlCheck() is applicable for argument types: (java.lang.String, java.lang.String) values: [1, SELECT * FROM ltc2_tube_cap WHERE code=11]
at grails.plugin.databasemigration.DslBuilder.invokeMethod(DslBuilder.groovy:117)
at Script1$_run_closure1_closure2_closure3.doCall(Script1.groovy:13)
at grails.plugin.databasemigration.DslBuilder.invokeMethod(DslBuilder.groovy:117)
at Script1$_run_closure1_closure2.doCall(Script1.groovy:12)
at grails.plugin.databasemigration.DslBuilder.invokeMethod(DslBuilder.groovy:117)
at Script1$_run_closure1.doCall(Script1.groovy:11)
at grails.plugin.databasemigration.GrailsChangeLogParser.parse(GrailsChangeLogParser.groovy:84)
at grails.plugin.databasemigration.DslBuilder.handleIncludedChangeLog(DslBuilder.groovy:747)
at grails.plugin.databasemigration.DslBuilder.createNode(DslBuilder.groovy:139)
at grails.plugin.databasemigration.DslBuilder.createNode(DslBuilder.groovy:590)
at grails.plugin.databasemigration.DslBuilder.invokeMethod(DslBuilder.groovy:117)
at Script1$_run_closure1.doCall(Script1.groovy:6)
at grails.plugin.databasemigration.GrailsChangeLogParser.parse(GrailsChangeLogParser.groovy:84)
at liquibase.Liquibase.update(Liquibase.java:107)
at DbmUpdate$_run_closure1_closure2.doCall(DbmUpdate:26)
at _DatabaseMigrationCommon_groovy$_run_closure2_closure11.doCall(_DatabaseMigrationCommon_groovy:59)
at grails.plugin.databasemigration.MigrationUtils.executeInSession(MigrationUtils.groovy:133)
at _DatabaseMigrationCommon_groovy$_run_closure2.doCall(_DatabaseMigrationCommon_groovy:51)
at DbmUpdate$_run_closure1.doCall(DbmUpdate:25)
Your syntax is incorrect for the sqlCheck preCondition.
sqlCheck(expectedResult:'1', 'SELECT * FROM [ltc2_tube_cap] WHERE code=11')
Notice that in your code the first argument is an assignment statement (expectedResult='1') when it should be a map entry (expectedResult:'1').
I found the answer buried on this Jira page: https://jira.grails.org/browse/GPDATABASEMIGRATION-40, which ironically is about adding lots of DB migration DSL examples to the documentation.
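Applied to the changelog in the question, the corrected precondition would look roughly like this (a sketch; the onFail argument presumably wants the same map-entry colon treatment as expectedResult):
preConditions(onFail: "WARN", onFailMessage: "Tube cap does not exist, skipping because it cannot be updated.") {
    sqlCheck(expectedResult: '1', 'SELECT * FROM [ltc2_tube_cap] WHERE code=11')
}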
Make sure of the following:
grails dbm-gorm-diff your-file-forupdate.groovy -add
Then inside your-file-forupdate.groovy you are expected to see:
databaseChangeLog = {
    changeSet(author:'kmert', id:'tubecap-insert-update-1') {
        .
        .
        .
    }
}
Then, the big deal is making sure you have included this as a migration script file to be executed, as follows:
Just manually add a line like the following to the end of grails-app/migrations/changelog.groovy:
include file: 'your-file-forupdate.groovy'
The changelog.groovy is always run from beginning to end, so make sure that you always add newly created migrations to the end.
Cheers! For more info, see this.
