I set up CloudCode for my app using the instructions found in the Parse documentation. Unfortunately I gave it a poor name and it is placed in an undesirable location. I would like to remove it and recreate it.
I have created a new CloudCode directory and moved the code into it. However, I did create a Scheduled Job that uses a function from the CloudCode in the old location. Is there a special process I need to go through to get rid of the old CloudCode, or can I simply trash that directory and run parse deploy with the new CloudCode?
You can trash the directory and run parse deploy. Parse's CLI overwrites the entire directory structure remotely using the local copy. Your instincts here are correct.
The problem:
I'm using the Bitbucket Stash (Server) API in a script for my project with the {path} API method:
/rest/api/1.0/projects/{projectKey}/repos/{repositorySlug}/browse/{path:.*}
The idea was to save versions of config files in a repository (version01-versionXX for every config). But those configs have the same structure with different names and parameters,
so when I push a new config with a commit message like 'version01' without specifying any sourceCommitId, Bitbucket automatically adds a parent commit from the last file with the same structure (if it exists). As a result, in this new file's history I'm getting several 'version01' commits, which is not what I intended to have.
What I've tried:
If I do specify sourceCommitId as the initial or the last commit on the branch, I get an error message since the file doesn't exist on this commit.
I've tried experimenting with an empty sourceBranch parameter, but a parent commit still appears.
The only idea I came up with is to create a new branch for every config, but this seems like overkill to me.
All attempts to find a method for editing a file's commit history via the API have also failed.
At the moment, as a workaround, I create every config file with its name as the only line of content and then change it to the structure I need (roughly as in the sketch below). This works so far, but it doesn't look like a good solution to me and requires two API requests instead of one.
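For reference, the workaround currently looks roughly like this (a sketch only: the host, project key, repo slug, credentials, branch, and file names are placeholders, and the response is assumed to carry the new commit's id):

# Rough sketch of the current two-request workaround, using the
# {path} endpoint quoted above. All names below are placeholders.
import requests

BASE = "https://stash.example.com/rest/api/1.0/projects/PRJ/repos/configs/browse"
AUTH = ("user", "token")

def put_file(path, content, message, branch="master", source_commit=None):
    # Bitbucket Server expects multipart/form-data, so send plain form
    # fields via the files= argument with (None, value) tuples.
    fields = {"content": (None, content),
              "message": (None, message),
              "branch": (None, branch)}
    if source_commit:
        fields["sourceCommitId"] = (None, source_commit)
    resp = requests.put(f"{BASE}/{path}", auth=AUTH, files=fields)
    resp.raise_for_status()
    return resp.json()

# Request 1: create the file with its own name as the only content line,
# so Bitbucket does not match it against an existing config's structure.
first = put_file("configs/service-a.conf", "service-a.conf", "version01")

# Request 2: overwrite it with the real structure, using the commit
# just created as the parent (assuming the response contains its id).
put_file("configs/service-a.conf", "host=...\nport=...\n", "version01",
         source_commit=first["id"])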
Is there a better way to prevent Bitbucket from treating those new files as copies of old ones?
Is it possible to somehow serialize the current ThingsBoard (let's call it TBoard) configuration, save it, and then later load the saved configuration on TBoard startup?
I am specifically interested in loading device profiles, rule chains, and dashboards.
I want to save the configuration together with my project in a git repository, so that later I could just use docker-compose to start multiple services from the project (let's call them sensors) and a single TBoard instance with the saved configuration, which would be used for collecting telemetry from the sensors and drawing dashboards.
Another reason for saving the configuration: what happens if, for some reason, the TBoard container crashes or somehow gets corrupted so it can't be started again? Would I have to click through everything again in order to recreate all the device profiles and dashboards, configure the rule chains, etc.?
Regarding this line
I am specifically interested in loading device profiles, rule chains, and dashboards. I want to save configuration together with my project in git repository
I have just recently implemented version control for my ThingsBoard deployment. The way I am doing it is with the Python REST client.
I have written functions to export all dashboards/data converters/integrations/rule chains/widgets into JSON files, which I save in a GitHub repository.
I have also written the reverse script to push the stored files to a fresh environment, essentially "flashing" it. Surprisingly, this works perfectly.
I have an idea to publish this as a package, but it's something I've never done before so I'm unsure if I will get to it.
Just letting you know that it is definitely possible to get source control operational via the API.
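For reference, here is a trimmed-down sketch of what the export side can look like when talking to the REST API directly (not my exact script: the host, credentials, and output directory are placeholders, and only dashboards are shown; the login endpoint, the X-Authorization header, and the tenant dashboards endpoint are the standard ThingsBoard ones, but double-check them against your version):

# Minimal export sketch: log in, list tenant dashboards, dump each full
# dashboard (including its configuration) to a JSON file for git.
import json
import pathlib
import requests

BASE = "https://tboard.example.com"          # placeholder TBoard URL
OUT = pathlib.Path("tb-config/dashboards")   # directory tracked in git

# 1. Log in and grab the JWT token.
login = requests.post(f"{BASE}/api/auth/login",
                      json={"username": "tenant@example.com",
                            "password": "secret"})
login.raise_for_status()
headers = {"X-Authorization": f"Bearer {login.json()['token']}"}

# 2. Page through the tenant's dashboards.
page = requests.get(f"{BASE}/api/tenant/dashboards",
                    params={"pageSize": 100, "page": 0},
                    headers=headers).json()

OUT.mkdir(parents=True, exist_ok=True)
for item in page.get("data", []):
    dash_id = item["id"]["id"]
    # 3. Fetch the full dashboard definition and store it as JSON.
    full = requests.get(f"{BASE}/api/dashboard/{dash_id}",
                        headers=headers).json()
    (OUT / f"{dash_id}.json").write_text(json.dumps(full, indent=2))

Device profiles, rule chains, and the rest follow the same list-then-fetch pattern, and the import direction is essentially the reverse: read each stored JSON file and POST it back to the corresponding save endpoint.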
I need a way to create customized stanzas and have MongooseIM recognize them and store the data accordingly inside a given database, such as MySQL, for later retrieval.
The reason I want to do this is that an app I am building has a chat that requires complex querying based on sub-objects' parameters. Also, anything a user does inside the app but outside of the chat, like changing the title of the group chat or liking a post, is logged inside the chat as a log message with the given postId and userId.
So ideally I want it to do something like this:
<postId>1</postId> //So that I can query by post id
<description>Hello</description> //Data for clients to update real time
<userId>1</userId> //also want to be able to query the db by this.
All of these values should be saved into the database that is provided for MAM inside MongooseIM.
You need to write your own custom module in Erlang. Here is how you can start:
https://mongooseim.readthedocs.io/en/latest/user-guide/Getting-started/
Build and install from source code
To build and install MongooseIM from source code, do the following:
Clone the Git repository:
git clone https://github.com/esl/MongooseIM.git
Go to your MongooseIM directory.
Run the following command: make rel.
In the code you will see apps/ejabberd/src.
Write your module there, compile it to get the beam file, and move the beam files into the release.
Let's say I am developing 2 applications for 2 different clients, which use 2 different database configurations.
When using OpenShift and CakePHP, it is considered good practice not to store the real connection info in the configs, but instead to use environment variables.
That way the Git repo is also always clean of server-specific stuff.
That is all fine as long as I have ONE project, but as soon as there is another one, I need to override my local env vars according to the current project.
Is there any way to avoid this? Is it possible to set up env-vars on my local machine per directory or something like that?
I am running OSX with Mamp Pro.
This may not be the best solution, but it would work. Create a different user on your local machine and then switch to that other user when you need to access those other environment variables.
I create a 'data' directory in my git repo and set it to be ignored. This way nothing in there will be committed to the repo or sent to OpenShift. In it I place a config.ini file with all the info that I don't want in my repo.
I then manually put that config.ini file in OpenShift's persistent DATA directory using WinSCP. You only have to do this when you change your config.ini.
When my app runs, it detects whether it's local or on OpenShift and loads the config.ini file from the correct directory (roughly as in the sketch below).
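The detection can be as simple as checking for the OPENSHIFT_DATA_DIR variable that OpenShift sets for the persistent data directory. My app is CakePHP, but the idea looks roughly like this sketch (the local fallback path and the section/key names are placeholders):

# Sketch: pick the config.ini location depending on the environment.
# OPENSHIFT_DATA_DIR points at OpenShift's persistent data directory;
# the local fallback path below is a placeholder.
import configparser
import os

data_dir = os.environ.get("OPENSHIFT_DATA_DIR", "/path/to/local/data")
config = configparser.ConfigParser()
config.read(os.path.join(data_dir, "config.ini"))

db_host = config["database"]["host"]   # assumed section/key names
db_name = config["database"]["name"]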
I would be interested if somebody has a better idea.
HTH
I've got a need to check out an entire source tree from one server and check it into another server. I'm attempting to script this into a FinalBuilder script, but am running into some snags. I'm able to check everything out, but when I attempt to check it into the new server it tells me there are no pending changes. Obviously I'm missing something, if this is even possible.
Anyone done something similar to this or know of a way I might accomplish this?
One more thing: if the source tree is empty on server 2, would I have to manually add the files before I can update them?
I would guess that the reason that TFS is saying no pending changes is that you haven't checked out the files from Server 2. This could get kind of ugly using a single directory, so I would recommend trying this:
1. Get (latest or specific version) from server 1 to C:\Server1Files...
2. Get and check out for edit everything from server 2 to C:\Server2Files...
3. Copy from C:\Server1Files\ to C:\Server2Files
4. Check in from C:\Server2Files
I think TFS is going to complain if you try to use a single directory here, as it would see the same directory mapped to two different workspaces (even though they're on different instances of TFS).
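If it helps for the scripting side, the four steps above can be driven from a script along these lines (a rough sketch only: the tf.exe options and local paths are placeholders to verify against your TFS version, and both directories are assumed to already be mapped to workspaces on their respective servers):

# Sketch: script the get/checkout/copy/checkin sequence with tf.exe.
# Assumes C:\Server1Files and C:\Server2Files are already mapped to
# workspaces on server 1 and server 2 respectively.
import shutil
import subprocess

def tf(*args, cwd):
    # Run a tf.exe command inside the given workspace directory.
    subprocess.run(["tf", *args], cwd=cwd, check=True)

# 1. Get the source tree from server 1.
tf("get", ".", "/recursive", cwd=r"C:\Server1Files")

# 2. Get and check everything out for edit on server 2.
tf("get", ".", "/recursive", cwd=r"C:\Server2Files")
tf("checkout", ".", "/recursive", cwd=r"C:\Server2Files")

# 3. Copy the files across (dirs_exist_ok needs Python 3.8+).
shutil.copytree(r"C:\Server1Files", r"C:\Server2Files", dirs_exist_ok=True)

# 4. Check the changes in on server 2; brand-new files would still need
#    a "tf add" first, as noted in the question.
tf("checkin", "/recursive", "/comment:Sync from server 1", cwd=r"C:\Server2Files")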