While creating a configuration profile, say using the iPhone Configuration Utility, there is no limit on the character length of the Name and Identifier fields.
But I have a requirement to collect details about the configuration profile and store them in my DB, and in the DB I have to set a length for these fields.
Theoretically, can someone suggest a meaningful length for these fields?
Regards
iOS apps usually use Core Data for storage, where string fields do not require a fixed length, so if you are using Core Data you do not need to worry about the length of the name.
But if you are using another DB, then it is up to you how long a name you want to be able to store.
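If you do need a concrete limit for a relational schema, here is a minimal sketch; the table, column names, and the 255-character limits are assumptions (a conservative ceiling for a display name and a reverse-DNS identifier such as com.example.profile), not anything mandated by the profile format.

# Hypothetical Rails-style migration; adjust the limits to your own data.
class CreateConfigurationProfiles < ActiveRecord::Migration[6.1]
  def change
    create_table :configuration_profiles do |t|
      t.string :name,       limit: 255   # profile display name
      t.string :identifier, limit: 255   # reverse-DNS identifier, e.g. com.example.profile
      t.timestamps
    end
  end
end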
I am used to Microsoft SQL Server, but I have a task requiring use of a client's InfluxDB server.
From what I understand of the schema, and what I've been able to explore with the CLI, once you select a database you can view all the field keys in that database, but there do not appear to be field keys that are exclusive to each measurement.
Is this true? If not, I will post a second question about how to access the field keys that ARE exclusive to a measurement, and I will link it immediately after an answer.
Relevant links are
https://docs.influxdata.com/influxdb/v1.7/concepts/crosswalk/
https://docs.influxdata.com/influxdb/v1.8/query_language/explore-schema/
I have tried using client.query("SHOW FIELD KEYS").get_points(), and it returns all field keys in all measurements in the database, since the client is only connected at the database level. But there does not seem to be a "use <measurement>" option the way there is a "use <database>" command.
Yes, each measurement has its own columns or field keys.
You can get the columns that are relevant to a single measurement by using
columns = client.query("SHOW FIELD KEYS").get_points(measurement=measurement_selection)
where measurement_selection is a string that matches the name of the measurement you wish to observe.
I am trying to understand the differences between a particular record stored in a sessions table in the database, vs session information stored in a sessions cookie. There is a part in the activerecord-session_store documentation that is confusing to me. Documentation is at: https://github.com/rails/activerecord-session_store
So, for whatever reason, I want to have a sessions table instead of just using the session cookie. I add the gem: gem "activerecord-session_store". I then run rails generate active_record:session_migration, which creates the migration that builds the sessions table in the database once I run rake db:migrate.
That sessions table holds two main columns: session_id (which is of type string) and data (which is of type text).
First Question: What exactly does session_id refer to? Is the session_id equal to the primary key, id?
My second question revolves around the documentation notes for the data column. This column is of type text. According to https://msdn.microsoft.com/en-us/library/ms187993.aspx the text datatype's maximum size is 2,147,483,647 bytes, so I would assume that this is the maximum number of bytes this column can hold. However, the activerecord-session_store documentation states:
data (text or longtext; careful if your session data exceeds 65KB).
It goes on to say this:
If the data you write is larger than the column's size limit, ActionController::SessionOverflowError will be raised.
Second Question: Why is the data column limited to 65KB when the data type text can hold 2,147,483,647 bytes? I thought that one of the main reasons why I might want a sessions table is because I want to store more stuff than a sessions cookie can store (which is 4093 bytes).
Third Question: How do I make it so that the data column can store more than 65KB of info?
Fourth Question: activerecord-session_store appears to only encode the data. Is it safe that the data is encoded rather than encrypted, given that the sessions table is located on my server as opposed to in a user's cookie? Is it necessary to encrypt the session data?
first question: No, session_id and id are not the same (although you can configure it to be the same, as described in the activerecord-session_store documentation).
second question: 65 kB is the conventional maximum size for a text column in MySQL (65,535 bytes). It changes to longtext if more than 65 kB needs to be stored (at least that is how I understand it, but I haven't tried it).
third question: see the second answer, although I'm not completely sure; a migration sketch is included below. I think the more important question is: why would you need to store more? ;)
fourth question: the encoding is not done for security reasons. The data is encoded...
...to store the widest range of binary session data in a text column
(according to the gem's documentation)
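As for the migration mentioned in the third answer, here is a minimal sketch, assuming a MySQL database; the migration name is made up, and the limit value is simply the MySQL LONGTEXT maximum, so treat it as illustrative rather than the gem's official recipe.

# Hypothetical migration that widens sessions.data from TEXT (~64 KB) to LONGTEXT (~4 GB).
class ChangeSessionsDataToLongtext < ActiveRecord::Migration[6.1]
  def up
    # On MySQL, a :text column with this limit is created as LONGTEXT.
    change_column :sessions, :data, :text, limit: 4294967295
  end

  def down
    change_column :sessions, :data, :text
  end
end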
Hope this helps!
Internally, my website stores Users in a database indexed by an integer primary key.
However, I'd like to associate Users with a number of unique, difficult-to-guess identifiers that will each be used in different circumstances. Examples:
One for a user profile URL: So a User can be found and displayed by a URL that does not include their actual primary key, preventing the profiles from being scraped.
One for a no-login email unsubscribe form: So a user can change their email preferences by clicking through a link in the email without having to login, preventing other people from being able to easily guess the URL and tamper with their email preferences.
As I see it, the key characteristics I'll need for these identifiers are that they are not easily guessed, that they are unique, and that knowing one key or identifier will not make it easy to find the others.
In light of that, I was thinking about using SecureRandom::urlsafe_base64 to generate multiple random identifiers whenever a new user is created, one for each purpose. As they are random, I would need to do database checks before insertion in order to guarantee uniqueness.
Could anyone provide a sanity check and confirm that this is a reasonable approach?
The method you are using relies on a secure random generator, so guessing the next URL will be hard even if you know one of them. When generating random sequences, this is a key aspect to keep in mind: non-secure random generators can become predictable, and having one value can help predict what the next one will be. You are probably OK on this one.
Also, urlsafe_base64 says in its documentation that the default random length is 16 bytes. This gives you 8^16 different possible values (2.81474977 × 10^14). This is not a huge number. For example, it means that a scraper doing 10,000 requests a second will be able to try all possible identifiers in about 900 years. That seems acceptable for now, but computers are becoming faster and faster, and depending on the scale of your application this could be a problem in the future. Just making the first parameter bigger can solve this issue, though.
Lastly, something that you should definitely consider: the possibility of your database being leaked. Even if your identifiers are bulletproof, your database might not be, and an attacker might be able to get a list of all identifiers. You should definitely hash the identifiers in the database with a secure hashing algorithm (with appropriate salts, the same as you would do for a password). Just to give you an idea of how important this is: with a recent GPU, SHA-1 can be brute-forced at a rate of 350,000,000 tries per second. A 16-byte key (the default for the method you are using) hashed using SHA-1 would be guessed in about 9 days.
In summary: the algorithm is good enough, but increase the length of the keys and hash them in the database.
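To make the hashing advice concrete, here is a minimal sketch that stores a SHA-256 digest instead of the raw token; the column name is hypothetical, and the answer above additionally suggests salting as you would for a password, which this sketch omits for brevity.

require 'securerandom'
require 'digest'

# Generate a token to hand out to the user (for example, embedded in an unsubscribe URL)...
token = SecureRandom.urlsafe_base64(32)

# ...but persist only a digest of it, so a leaked database does not expose usable tokens.
digest = Digest::SHA256.hexdigest(token)
# user.update!(unsubscribe_token_digest: digest)    # hypothetical column name

# Later, hash the value presented in the request and look the user up by the digest:
# user = User.find_by(unsubscribe_token_digest: Digest::SHA256.hexdigest(params[:token]))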
Because the generated ids are not related to any other data, they are going to be very hard (practically impossible) to guess. To quickly validate their uniqueness and find users, you'll have to index them in the DB.
You'll also need a function that generates an id and checks its uniqueness before returning it, something like:
def generate_id(field_name)
  rnd = nil
  loop do
    rnd = SecureRandom.urlsafe_base64
    # Keep generating until no existing user already has this value.
    # The dynamic column name is passed as a hash key, hence the hash-rocket syntax.
    break unless User.exists?(field_name => rnd)
  end
  rnd
end
One last security check: verify the correspondence between an identifier and the user's information before making any changes, at least for the email address.
That said, it seems a good approach to me.
My team is working on an application with a legacy database that uses two different values as unique identifiers for a Group object: Id is an auto-incrementing Identity column whose value is determined by the database upon insertion. GroupCode is determined by the application after insertion, and is "Group" + theGroup.Id.
What we need is an algorithm to generate GroupCodes that:
Are unique.
Are reasonably easy for a user to type in correctly.
Are difficult for a hacker to guess.
Are either created by the database upon insertion, or are created by the app before the insertion (i.e. not dependent on the identity column).
The existing solution meets the first two criteria, but not the last two. Does anyone know of a good solution to meet all of the above criteria?
One more note: Even though this code is used externally by users, and even though Id would make a better identifier for other tables to link their foreign keys to, the GroupCode is used by other tables to refer to a specific Group.
Thanks in advance.
Would it be possible to add a new column? It could consist of the Identity and a random 32-bit number.
That 64-bit number could then be translated to a «Memorable Random String». It wouldn't be perfect security-wise, but it could be good enough.
Here's an example using Ruby and the Koremutake gem.
require 'koremu'
# http://pastie.org/96316 adds Array.chunk
identity = 104711
r = rand(2**32) << 32 # random high 32 bits; in this example 5946631977955229696
# Combine the random high bits with the identity in the low bits, convert to
# Koremutake syllables, and group them three to a word.
ka = KoremuFixnum.new(r + identity).to_ka.chunk(3)
ka.each { |arr| print KoremuArray.new(arr).to_ks + " " }
Result:
TUSADA REGRUMI LEBADE
Also check out Phonetically Memorable Password Generation Algorithms.
Have you looked into Base32/Base36 content encoding? A Base32 representation of an Identity seed column will make it unique and easy to enter, but definitely not secure. However, most non-programmers will have no idea how the string value is generated.
Also, using Base32/36 you can keep normal integer-based primary keys in the database.
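As an illustration only (not a scheme either answer prescribes), Base36 encoding in Ruby is just Integer#to_s(36); mixing in a random 32-bit component, as the earlier answer suggests, is what makes the code hard to guess.

require 'securerandom'

# Base36 of the identity alone: unique and easy to type, but trivially reversible.
identity = 104711
identity.to_s(36)                  # => "28sn"

# Mixing in a random 32-bit value makes the code hard to guess; the identity can
# still be recovered with group_code.to_i(36) & 0xFFFFFFFF.
random_part = SecureRandom.random_number(2**32)
group_code  = ((random_part << 32) | identity).to_s(36).upcase
puts group_code                    # value differs on every run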
Assume a data structure Person used for a contact database. The fields of the structure should be configurable, so that users can add user-defined fields to the structure and even change existing fields. So basically there should be a configuration file like:
FieldNo FieldName DataType DefaultValue
0 Name String ""
1 Age Integer "0"
...
The program should then load this file, manage the dynamic data structure (dynamic not in a "change during runtime" way, but in a "user can change via configuration file" way) and allow easy and type-safe access to the data fields.
I have already implemented this, storing information about each data field in a static array and storing only the changed values in the objects.
My question: Is there any pattern describing that situation? I guess that I'm not the first one running into the problem of creating a user-adjustable class?
Thanks in advance. Tell me if the question is not clear enough.
I've had a quick look through "Patterns of Enterprise Application Architecture" by Martin Fowler, and the Metadata Mapping pattern matches (at a quick glance) what you are describing.
An excerpt...
"A Metadata Mapping allows developers to define the mappings in a simple tabular form, which can then be processed bygeneric code to carry out the details of reading, inserting and updating the data."
HTH
I suggest looking at the various Object-Relational patterns in Martin Fowler's Patterns of Enterprise Application Architecture, available here. A list of the patterns it covers is available there as well.
The best fit for your problem appears to be Metadata Mapping. There are other relevant patterns too, such as Mapper.
The normal way to handle this is for the class to have a list of user-defined records, each of which consists of a list of user-defined fields. The configuration information for this can easily be stored in a database table containing a type id, field type, etc. The actual data is then stored in a simple table with each value represented only as an (object id + field index)/string pair; you convert the strings to and from the real type when you read or write the database.
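A minimal sketch of that idea in Ruby; the class, field definitions, and conversion rules are made up for illustration, and an in-memory hash stands in for the database tables described above.

# Field definitions as they might be loaded from the configuration file ("metadata").
FieldDef = Struct.new(:index, :name, :type, :default)

FIELDS = [
  FieldDef.new(0, "Name", :string,  ""),
  FieldDef.new(1, "Age",  :integer, "0"),
]

class Person
  def initialize
    # Only changed values are stored, keyed by field index, always as strings.
    @values = {}
  end

  def set(name, value)
    field = lookup(name)
    @values[field.index] = value.to_s
  end

  def get(name)
    field = lookup(name)
    raw = @values.fetch(field.index, field.default)
    # Convert the stored string back to the configured type.
    field.type == :integer ? Integer(raw) : raw
  end

  private

  def lookup(name)
    FIELDS.find { |f| f.name == name } or raise ArgumentError, "unknown field #{name}"
  end
end

person = Person.new
person.set("Age", 42)
person.get("Age")    # => 42
person.get("Name")   # => ""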