Value generator document #22
Conversation
While it may make sense for our use cases to only generate values for required fields, I would prefer we do not force this on all clients. We have other constraints that can be used to mark a field as required. Field value generation would be better off as an independent feature. |
I'll pick on the integer generator, but some of this is applicable to any of them.
You're making a new property that is part of the field, not a new constraint. Additionally, we have a requirement to start generation above a specific number. The type generated is implied by the field's type.
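Something along these lines, where the field name, `generate`, and its properties are only illustrative, not an agreed spec:

```json
{
    "customerId": {
        "type": "integer",
        "generate": {
            "type": "sequence",
            "min": 150000
        }
    }
}
```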
For string, the uid behavior is good. I would probably be explicit about the type of generation, though:
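For example (again, just a sketch with made-up property names):

```json
{
    "_id": {
        "type": "string",
        "generate": {
            "type": "uuid"
        }
    }
}
```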
And for date there's a need to specify what date to use. The current use case is current date/time:
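Sketching it with the same made-up `generate` property:

```json
{
    "creationDate": {
        "type": "date",
        "generate": {
            "type": "currentTime"
        }
    }
}
```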
Long term we might consider adding more to the generated constraint, at least for integer:
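For instance (illustrative names only):

```json
{
    "customerId": {
        "type": "integer",
        "generate": {
            "type": "sequence",
            "min": 150000,
            "increment": 1,
            "max": 999999999
        }
    }
}
```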
|
What exactly does this do? Generate a random integer above 150000? "constraints": {
|
@bserdar yes |
Is there a use case for a generated random non-unique value of any type?
|
Constraints prevent you from doing something, or force you to do something. A generator is different: even if you specify a minimum for a generated integer, it will not prevent you from persisting a value below that minimum. In my mind generators and constraints are independent entities. In your examples, you are specifying generation properties in the schema. I placed them in the entityInfo section because I think generators should be reusable, like constants.
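A rough sketch of the idea, with made-up property names: a generator defined once in entityInfo and referenced from a field, while the field's constraints stay separate:

```json
{
    "entityInfo": {
        "name": "customer",
        "generators": {
            "customerIdSequence": {
                "type": "integerSequence",
                "min": 150000
            }
        }
    },
    "schema": {
        "fields": {
            "customerId": {
                "type": "integer",
                "generator": "customerIdSequence",
                "constraints": {
                    "required": true
                }
            }
        }
    }
}
```

In this sketch the generator only fills in a value when none is given; the required constraint still validates whatever is actually persisted. |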
@bserdar I can't think of any use case for random non-unique types. The only non-unique example I can think of is date, which is not random. @paterczm you had the generator definition under "more ideas" so I didn't give it much weight in my comments. If it's the way generators are defined, it shouldn't be just an idea; it's part of the spec. Agree generators are not equal to constraints. Is there value in reusable generator definitions? Would the generator definition really be global to all versions of metadata, or could it change per version? |
Since having non-unique random numbers is not really useful, why don't we use a sequence instead?
|
I can think of only 2 use cases right now (defined at the top of the document). I proposed a minimum set of changes to cover them. The 'more ideas' section covers everything I could think of for lightblue-platform/lightblue-core#204. I think we should have a design for everything that is potentially useful, but focus on those 2 use cases for now, especially since this is blocking the terms work. Sorry for not making this clear enough. I will update the document.
I think so. StringId generator (to replace uid type) will be reusable, same thing with integerId and currentDate.
At the very bottom, in the 'Further ideas' section, I proposed: 'allow generator properties to be overridden in the schema'. If there is a need, we can make them versioned, though I don't see such a need right now.
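For instance, several fields could point at the same named generator (sketch only, field and generator names are illustrative):

```json
{
    "fields": {
        "_id": { "type": "string", "generator": "stringId" },
        "creationDate": { "type": "date", "generator": "currentDate" },
        "lastUpdateDate": { "type": "date", "generator": "currentDate" }
    }
}
```
|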
Wouldn't that make it harder to keep Lightblue db agnostic? I don't think there is a common sequence api for all rdbms databases. |
The actual implementation of sequence would be in a Dialect class.
|
Given there is not a requirement for an actual sequence of identifiers, why not just generate them? The benefit of the sequence in RDBMS is you don't have to worry about uniqueness violations. For mongo we may have issues, given the lack of transactions and the fact that "update" in lightblue is not atomic. Unless we go against the db directly. I just don't see a reason to do the work to support a true sequence when there isn't a requirement for it. |
Let's finish this design, it has become a blocker again. I see the following open questions:
|
@paterczm re: my comment: If we do random values for numeric generation, we have to deal with collisions. With an incrementing sequence this isn't a problem.

If the definition is in the entityInfo it's not versioned. I think this makes sense. Meaning, if you change how the values are generated it impacts all versions of the entity. There probably shouldn't be variance in how things are generated. This would be the expectation in a sequence or trigger based approach in an RDBMS, at least.

For random vs sequence, what is easiest? If it's sequence, then we could reduce updates to the sequence doc by grabbing a block of numbers from the sequence instead of one at a time. This is something RDBMS sequences support as well. It's a trade-off between managing a set of values that have already been consumed in the sequence doc and having to deal with unique key violations when using a random value. Which is easier?
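If we go the sequence route, grabbing a block could be one atomic findAndModify that increments a counter document by the block size, e.g. (collection and field names are just an illustration):

```json
{
    "findAndModify": "sequences",
    "query": { "_id": "customerId" },
    "update": { "$inc": { "current": 100 } },
    "upsert": true,
    "new": true
}
```

The returned counter value marks the end of a reserved block of 100 ids that the process can hand out locally, with no further updates to the sequence doc and no unique key violations. |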
Sequences can be done the same way as the synchronization apis, as an
|
Ok
|
@paterczm is this PR still valid? It's been put into the user guide http://docs.lightblue.io/cookbook/value_generators.html |
10 years later, probably safe to close? 😆 |
This is a document outlining value generators. I created a pull request to gather feedback.
Issue: lightblue-platform/lightblue-core#204
Internal discussion: https://mojo.redhat.com/message/957916