Dynamic markers for Metaio (augmented reality)

I would like to have several logos in my application that are recognized as markers and trigger some sort of event, unique per logo. From time to time I want to update these logos/markers. I don't want to use Metaio Cloud; I want the app to call a web service on my server and download new markers/logos.
Is this possible? Can you point me in the right direction as far as data formats, etc.?

It should still be possible to pull content from your own private FTP server. Something along these lines:
https://dev.metaio.com/creator/advancedmore-information/connect-your-own-ftp/

Yes, it is possible.
1) Via your web service, download the images (give them the same names as the images you define in your assets) and save them into your assets folder, e.g.:
String assetsPath = context.getFilesDir().getPath() + "/Assets1";
Then in your main activity call:
AssetsManager.extractAllAssets(getApplicationContext(), false);
Passing false means previously extracted assets are not overwritten, so your downloaded images are kept.
2) After the download completes, you need to load the tracking configuration again.
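As for the "data formats" part of the question: on the server side this can be as simple as a JSON manifest the app polls, listing the current marker images with checksums so the client only downloads what changed. A minimal server-side sketch (the directory layout and field names are hypothetical, not part of any Metaio API):

```python
import hashlib
import json
import os

def build_manifest(marker_dir: str) -> str:
    """Return a JSON manifest mapping marker file name -> MD5 checksum.

    Clients compare checksums against their local copies and fetch
    only the images that changed since the last poll.
    """
    entries = {}
    for name in sorted(os.listdir(marker_dir)):
        with open(os.path.join(marker_dir, name), "rb") as f:
            entries[name] = hashlib.md5(f.read()).hexdigest()
    return json.dumps({"markers": entries})
```

The app would fetch this manifest, download any changed images into its assets folder, and then reload the tracking configuration as described above.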


Deactivate read-only & non-realtime mode in Firebase [duplicate]

Read-only & non-realtime mode activated to improve browser performance
This message pops up in my project, and I'm unable to delete the nodes either.
I also read this thread: https://groups.google.com/forum/#!topic/firebase-talk/qLxZCI8i47s
which states:
If you have a lot of nodes in your Firebase (say thousands), we need to create a new element for each node and modern browsers simply have limitations of how many DOM elements you can add to a page
and:
To resolve this problem, don't load your Firebase Dashboard at the root of your Firebase, but instead load it lower down in the hierarchy
I don't understand what that means.
How do I get back to my Realtime Dashboard?
If you want to delete a high-level node while this mode is active, I recommend the following.
Open a text editor and type in { }. Save this file as "blankJSON.json".
Go to the high-level node you want deleted and select it. Once it opens up and shows you all the nodes that need to be removed, click the three bars at the top right and select "Import JSON". (It would be wise to "Export JSON" first if you don't have backups, in case you make a mistake here.) Import the "blankJSON.json" file we created earlier.
This will delete all of the data inside.
Once again, I highly suggest you make a backup before doing this. Making a backup is extremely easy, and it is much easier than you would think to upload this blank JSON to the wrong node and erase a bunch of important data.
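If you'd rather script this step than use a text editor, the blank file is trivial to generate; a sketch (the filename matches the one used above):

```python
import json

# Write the empty JSON object used to wipe a node via "Import JSON".
with open("blankJSON.json", "w") as f:
    json.dump({}, f)
```

Importing this file at the selected node replaces everything under it with an empty object, i.e. deletes it.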
When it detects that it's downloading too many nodes from your database, the Firebase Console stops using real-time mode and switches to read-only mode. In this mode it requires less work from the browser, so it is more likely that the browser will stay performant.
To get back to realtime mode, you have to go to a location that has fewer nodes. Say you start loading the database at the root; then the "pseudo address bar" at the top of the data tree will say:
https://<your-project>.firebaseio.com/
And then will show the list of items. Now click on the URL in that pseudo address bar and change it to:
https://<your-project>.firebaseio.com/<one-of-your-keys>
And hit Enter. The data tree will reload with just the node at one-of-your-keys and below, and will likely switch back to realtime mode.
Every node key in the Firebase console is a link: you can open a sub-node in a new tab and then edit that sub-node and its children.
Right-click on the sub-node you want to edit or delete
Select "Open link in new tab"
Edit the sub-node in the new tab
1) Click on the node you want to mass-delete
2) Import an empty .json file (just containing curly braces, {})
3) The node's value will be set to null; in other words, it is deleted, or rather overridden with an empty node!
What you can do is to have an OnClickListener and call the remove value method to your DatabaseReference, like this:
mCart.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        mDatabaseReference.removeValue();
    }
});
I have the same problem... I'm a bit surprised, because I thought Firebase could easily scale to support huge amounts of data (millions of users, etc.).
I have a node with 80,000 sub-nodes (each object has its own push ID) and I cannot delete it or perform any action on it, because realtime mode doesn't work in the Firebase console.
I think the only way to update or delete the data is to do it via Java code :(
Loading the specific keys one at a time can be tiresome. There is a Python library that can do this for you easily:
http://ozgur.github.io/python-firebase/
I needed to delete a lot of keys, and this helped me do it in one go.
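If installing a library feels like overkill, the same bulk deletion can be done against Firebase's REST API: an HTTP DELETE on any path's .json endpoint removes that node and everything under it. A minimal sketch using only the standard library; the project URL and node path are placeholders, and it assumes a database whose rules allow unauthenticated writes (otherwise append an auth token):

```python
import urllib.request

def node_url(base: str, path: str) -> str:
    # Firebase's REST API addresses a node by appending ".json" to its path.
    return base.rstrip("/") + "/" + path.strip("/") + ".json"

def delete_node(base: str, path: str) -> None:
    # Issue an HTTP DELETE; Firebase removes the node and all its children.
    req = urllib.request.Request(node_url(base, path), method="DELETE")
    urllib.request.urlopen(req).close()
```

Usage would look like delete_node("https://<your-project>.firebaseio.com", "big-node"), which wipes the big-node subtree without the console ever trying to render it.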
What I do is export the entire tree, edit/add the node I want in an editor, then import the JSON, overwriting the previous node/tree. Problem solved! Risky, though 😁
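That export/edit/import round-trip can also be scripted so the risky editing step is repeatable. A sketch; the helper and the slash-separated path convention are hypothetical, operating on the exported JSON loaded as a dict:

```python
import json

def delete_path(tree: dict, path: str) -> dict:
    """Remove the node at slash-separated `path` from an exported tree, in place."""
    parts = path.strip("/").split("/")
    node = tree
    for part in parts[:-1]:
        node = node.get(part, {})
    node.pop(parts[-1], None)  # no error if the node is already gone
    return tree

# Example: prune one node from an in-memory export, then dump it for re-import.
export = {"users": {"u1": {"name": "a"}, "u2": {"name": "b"}}}
cleaned = delete_path(export, "users/u1")
print(json.dumps(cleaned))  # → {"users": {"u2": {"name": "b"}}}
```

Writing the cleaned dict back to a file and importing it at the root replaces the old tree, same as the manual workflow above.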

How can I preserve storage space and load time with Active Storage?

I have a user submission form that includes images. Originally I was using Carrierwave, but with that the image is sent to my server for processing first before being saved to Google Cloud Services, and if the image/s is/are too large, the request times out and the user just gets a server error.
So what I need is a way to upload directly to GCS. Active Storage seemed like the perfect solution, but I'm getting really confused about how hard compression seems to be.
An ideal solution would be to resize the image automatically upon upload, but there doesn't seem to be a way to do that.
A next-best solution would be to create a resized variant upon upload using something like record.images.first.variant(resize_to_limit: [xxx, xxx]) (using the image_processing gem), but the docs seem to imply that a variant can only be created upon page load, which would obviously be extremely detrimental to load time, especially if there are many images. More evidence for this is that when I create a variant, it's not in my GCS bucket, so it clearly only exists in my server's memory. If I try
record.images.first.variant(resize_to_limit: [xxx, xxx]).service_url
I get a url back, but it's invalid. I get a failed image when I try to display the image on my site, and when I visit the url, I get these errors from GCS:
The specified key does not exist.
No such object.
so apparently I can't create a permanent url.
A third best solution would be to write a Google Cloud Function that automatically resizes the images inside Google Cloud, but reading through the docs, it appears that I would have to create a new resized file with a new url, and I'm not sure how I could replace the original url with the new one in my database.
To summarize, what I'd like to accomplish is to allow direct upload to GCS, but control the size of the files before they are downloaded by the user. My problems with Active Storage are that (1) I can't control the size of the files on my GCS bucket, leading to arbitrary storage costs, and (2) I apparently have to choose between users having to download arbitrarily large files, or having to process images while their page loads, both of which will be very expensive in server costs and load time.
It seems extremely strange that Active Storage would be set up this way and I can't help but think I'm missing something. Does anyone know of a way to solve either problem?
Here's what I did to fix this:
1- I upload the attachment the user added directly to my storage service (I use S3).
2- I add an after_commit callback that queues a Sidekiq job to generate the thumbs.
3- My Sidekiq worker (AttachmentWorker) calls my model's generate_thumbs method.
4- generate_thumbs loops through the different sizes that I want to generate for this file.
Now, here's the tricky part:
def generate_thumbs
  [
    { resize: '300x300^', extent: '300x300', gravity: :center },
    { resize: '600>' }
  ].each do |size|
    self.file_url(size, true)
  end
end

def file_url(size, process = false)
  value = self.file # where file is my has_one_attached
  if size.nil?
    url = value
  else
    url = value.variant(size)
    if process
      url = url.processed
    end
  end
  url.service_url
end
In the file_url method, we only call .processed if we pass process = true. I've experimented a lot with this method to get the best possible performance out of it.
The .processed call checks with your bucket whether the file already exists, and if not, it generates the new file and uploads it.
Also, here's another question that I have previously asked concerning ActiveStorage that can also help you: ActiveStorage & S3: Make files public
I absolutely don't know Active Storage. However, a good pattern for your use case is to resize the image when it comes in. For this:
Let the user store the image in Bucket1
When the file is created in Bucket1, an event is triggered. Plug a function into this event
The Cloud Function resizes the image and stores it in Bucket2
You can delete the image in Bucket1 at the end of the Cloud Function, or keep it for a few days, or move it to cheaper storage (to keep the original image in case of issues). For the last two options, you can use lifecycle rules to delete files or change their storage class.
Note: you can use the same bucket (instead of Bucket1 and Bucket2), but then a resize event will be sent every time a file is created in the bucket, including for the files your function writes. You can use Pub/Sub as middleware and add a filter on it so your function is only triggered when a file is created in the correct folder. I wrote an article on this.
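To make the shape of this concrete, here is a rough sketch of such a function in Python (1st-gen Cloud Functions, GCS finalize trigger). The destination bucket name and the 1000-px limit are assumptions, and the cloud/image imports are done lazily so the size math is independent of them:

```python
def fit_within(width: int, height: int, max_side: int) -> tuple:
    # Scale down (never up) so the longest side is at most max_side,
    # preserving aspect ratio.
    scale = min(1.0, max_side / max(width, height))
    return (round(width * scale), round(height * scale))

def resize_on_upload(event, context):
    # Background Cloud Function: `event` carries the bucket and object name
    # of the file that was just finalized in Bucket1.
    from google.cloud import storage  # lazy imports: only needed in the cloud
    from PIL import Image

    client = storage.Client()
    src = client.bucket(event["bucket"]).blob(event["name"])
    src.download_to_filename("/tmp/original")

    img = Image.open("/tmp/original")
    img.thumbnail(fit_within(img.width, img.height, 1000))  # same math as above
    img.save("/tmp/resized", format=img.format)

    # Write the result to the second bucket ("bucket2" is a placeholder name).
    client.bucket("bucket2").blob(event["name"]).upload_from_filename("/tmp/resized")
```

Deployed with a trigger on Bucket1, this leaves the original in place; deleting it at the end of the function, or letting a lifecycle rule handle it, covers the cleanup step described above.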

Will this lua code work to download certain files on my GMOD server

I have recently been building my GMOD server and it is slowly getting popular, but I was interested in creating an addon, so I put together something that should download some workshop links in the loading screen and others in-game. This is what I have created.
sv_auto_download:
-- Write the map download codes below
resource.AddWorkshop( "" )

function DownloadFiles()
    -- Write the texture codes below
    resource.AddWorkshop( "" )
    return ""
end

hook.Add( "PlayerInitialSpawn", "DownloadFiles" )
No, this will not work.
Firstly, PlayerInitialSpawn runs after the loading screen, and resource.AddWorkshop is a server-side function that is loaded once so that the server knows which workshop files to load, meaning addons will still be downloaded in the loading screen anyhow.
You cannot "download some workshop links in the loading screen and others in-game", and you should not force players to download 10 GB of models if they don't want to.
The best way to get players to download addons is through the workshop.
Create a collection on the steam workshop for your addons, for example http://steamcommunity.com/sharedfiles/filedetails/?id=1244735564
Head over to http://steamcommunity.com/dev/apikey, use your server's IP address as the website, and save your API key somewhere safe (i.e. don't share it)
Go to your launch options for srcds.exe (either a .bat file or in the server dashboard) and add -authkey 3XAMPL3K3YF0RTH3T3ST3 +host_workshop_collection (collection ID), with the collection ID being the ?id=1244735564 part of the collection's URL
Then, players will automatically download server content and it is easy for you to add more addons, with it also serving as a way for players to quickly download large models permanently if they wish to play on your server for an extended period of time.
By the way, you forgot to include the function delegate in the hook.Add call:
hook.Add( "PlayerInitialSpawn", "DownloadFiles", DownloadFiles )

What is a good way to display images after successfully uploading to a server? (Objective-C, iOS)

Given:
view A (a UITableView) is used to display all images after you successfully pull them from a server via a request named getAllImages
you can also upload a new photo in view A via a top-right button
My goal:
display a new set of images (with the new image included) in the table
What I am doing:
send the upload request to the server (I am using AFNetworking for that)
since the server side only returns "success" or "failure" with no other information, on success I fire a request to fetch the new set of images via getAllImages
invoke reloadData to display the new set of data in the table
I am not sure this is a good way to do it; I am still looking for the best approach for this task. I don't know whether I should use Core Data here, or how I would use it.
Please give me suggestions if you have experience with this task. Any comments are appreciated.
Here is what I would do:
1 - call getAllImages to show all N images
2 - take the new photo
3 - display the N images previously fetched via getAllImages, plus the 1 local image from step 2
4 - fire an asynchronous request (I don't remember exactly how to do that with AFNetworking) to upload the image from step 2
5 - on a success code, keep the N+1 images. On a failure code, show only N images and remove the last one.
You can reload just a single row using reloadRowsAtIndexPaths:withRowAnimation:, without much of a performance hit.

How to export MovieClip to SWF via AS3?

I wrote an application in Flash CS5 which allows users to make their own Christmas cards, but at the end of programming I realized that I should provide some function to save the user's card to a separate SWF file...
Please, anyone who knows, help me! I tried to find something on Google, but all I understood is that I should use ByteArray. But I can't really work out HOW to use it in my case.
All I have found is these four lines:
var buffer:ByteArray = new ByteArray();
buffer.writeObject(MOVIE_CLIP_HERE);
buffer.position = 0;
buffer.writeBytes(...);
For experienced developers maybe this helps, but I can't see how these lines solve my problem... thank you very much)))
You will need some server-side technology, like PHP or ASP, because Flash Player can't save anything to disk. And creating a SWF file programmatically can be very difficult. That being said, this is how I would do it:
First, I would write the movieclip to a ByteArray, just like in your example:
var buffer:ByteArray = new ByteArray();
buffer.writeObject(card_mc);
Then I would send the byte array to a PHP script which would save the data from the byte array in a file (a text file will do). The saved data will actually be your serialized movieclip.
Then, I would create a swf file which will serve as the actual card, but it will be in fact a container for the saved movieclip. This file will load the data from the text file into a ByteArray and deserialize the movieclip:
var loadedClip:MovieClip = MovieClip(byteArray.readObject());
Once you have managed this, you're done. When users save their cards to their computer, you can send them the container swf file and keep the data file on your server (but in this case the swf will need to load the movieclip from your server), or you can give them both files.
I hope this helped.
