Description
Opened on Nov 9, 2016
In some contexts it seems to make sense to have validation occur on the server (and not on the client). What I mean by validation is things like "`int foo` must be greater than 5". Currently, if you specify a `minimum` or `maximum` in Swagger, AutoRest generates code on the client which enforces those bounds.
There are upsides to enforcement on the client: for example, it may improve unit testability, and it saves a round trip to the service. But the downside is that if the server relaxes its limits, the client cannot take advantage of the change.
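To make the trade-off concrete, here is a minimal sketch of the kind of guard a generated client method performs (the names and shape are illustrative, not AutoRest's actual output):

```python
def list_foos(max_results: int) -> dict:
    """Hypothetical generated client method for a List_Foos operation."""
    # Client-side enforcement of the Swagger constraints
    # "minimum": 1 and "maximum": 10 on the maxResults parameter.
    # The call fails locally, before any request is sent.
    if max_results < 1:
        raise ValueError("max_results must be >= 1")
    if max_results > 10:
        raise ValueError("max_results must be <= 10")
    # ... issue the HTTP request to the service here ...
    return {"maxResults": max_results}
```

Because the check is baked into the generated code, a client built against the old spec rejects `max_results=50` locally even after the server starts accepting it.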
For example consider the following:
An operation `List_Foos` specifies `"minimum": 1` and `"maximum": 10` for its parameter `maxResults`.
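In a Swagger 2.0 spec, that parameter might look like this (a minimal sketch; the names follow the example above):

```json
{
  "name": "maxResults",
  "in": "query",
  "type": "integer",
  "minimum": 1,
  "maximum": 10
}
```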
At some point in the future, the service behind `List_Foos` overcomes a performance bottleneck and can now process `Foos` much faster, so it bumps the `maxResults` maximum to `100`. There are a couple of possibilities here:
1. They only increase the maximum in a new REST API version.
2. They increase the maximum across all existing REST API versions (what previously had a max of `10` now has a max of `100`).
It seems like option 2 above is valid in at least some cases: it benefits clients, and it appears to cause no pain, since the behavior is identical to what it was before unless clients opt into the new limits. But what should the Swagger specification look like? Should services which might one day change the bounds of a value in a non-breaking way simply never list the mins/maxes (for example) in their Swagger spec? Or should services just never do option 2 above?
One possible alternative might be to allow min/max to not be enforced on the client through the use of some flag, so that they could be present in the spec as an "FYI" but not actually enforced until the request hits the server.
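As a sketch, such a flag could take the form of a vendor extension on the parameter (the `x-validation-advisory` name here is hypothetical, not an existing AutoRest extension):

```json
{
  "name": "maxResults",
  "in": "query",
  "type": "integer",
  "minimum": 1,
  "maximum": 10,
  "x-validation-advisory": true
}
```

With such a flag, generated clients would surface the bounds in documentation but skip the local check, leaving enforcement to the server.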