Merge branch 'master' of https://github.com/Azure/azure-rest-api-specs into fix_certainty_swagger

* 'master' of https://github.com/Azure/azure-rest-api-specs:
  Network november release (Azure#13224)
  read replica added (Azure#12567)
  Fix parent class of ClusterResource and DataCenterResource in .NET SDK (Azure#13244)
  Update credential scope for Python. (Azure#13263)
  [Hub Generated] Review request for Face to add version stable/v1.0 (Azure#12739)
  Update Certainty enum (Azure#13247)
  Added Swagger Doc for Settings API (Azure#13241)
  [Hub Generated] Review request for Microsoft.Consumption to add version stable/2019-10-01 (Azure#12822)
  fix web python.md (Azure#13162)
  Peering new api version 2021-01-01 (Azure#12855)
  Update Device Update for IoT Hub control plane autorest file for C# with correct namespace and output folder (Azure#13251)
  update swagger reviews for translator text (Azure#13246)
  [deviceupdate] make changes to readme in time for first release (Azure#13240)
iscai-msft committed Mar 4, 2021
2 parents ec70ad3 + 04d3607 commit cab6650
Showing 694 changed files with 95,692 additions and 39 deletions.
8 changes: 8 additions & 0 deletions .github/pull_request_assignment.yml
@@ -1,4 +1,12 @@
---
- rule:
# translator data-plane PR
paths:
- "specification/cognitiveservices/data-plane/TranslatorText/**"
reviewers:
- kristapratico
- maririos

- rule:
# eventgrid data-plane PR
paths:
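The rule added above maps changed file paths to reviewers via glob patterns. A minimal sketch of how such a rule could be evaluated, assuming fnmatch-style matching (the actual assignment bot's implementation is not shown here, and the helper name is hypothetical):

```python
from fnmatch import fnmatch

# Mirrors the YAML rule above; matching logic is an illustrative assumption.
# Note: fnmatch's '*' already crosses '/' separators, so '**' behaves like a
# recursive glob here, a simplification of real GitHub-style glob semantics.
RULE = {
    "paths": ["specification/cognitiveservices/data-plane/TranslatorText/**"],
    "reviewers": ["kristapratico", "maririos"],
}

def reviewers_for(changed_path: str, rule: dict) -> list:
    """Return the rule's reviewers if the changed file matches any path glob."""
    if any(fnmatch(changed_path, pattern) for pattern in rule["paths"]):
        return rule["reviewers"]
    return []

print(reviewers_for(
    "specification/cognitiveservices/data-plane/TranslatorText/stable/v1.0/TranslatorText.json",
    RULE,
))
```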
2 changes: 2 additions & 0 deletions custom-words.txt
@@ -1123,6 +1123,7 @@ mypath
mypicture
mypictures
myregistry
myscope
myshopify
mysite
mysquare
@@ -1469,6 +1470,7 @@ Reregister
Rescan
reservationorders
resetapikey
resetconnection
resetvpnclientsharedkey
Resolvability
resourcegraph
@@ -1048,7 +1048,7 @@
},
"/detect": {
"post": {
- "description": "Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, and attributes.<br />\n* No image will be stored. Only the extracted face feature will be stored on server. The faceId is an identifier of the face feature and will be used in [Face - Identify](../face/identify), [Face - Verify](../face/verifyfacetoface), and [Face - Find Similar](../face/findsimilar). The stored face feature(s) will expire and be deleted at the time specified by faceIdTimeToLive after the original detection call.\n* Optional parameters include faceId, landmarks, and attributes. Attributes include age, gender, headPose, smile, facialHair, glasses, emotion, hair, makeup, occlusion, accessories, blur, exposure and noise. Some of the results returned for specific attributes may not be highly accurate.\n* JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size is from 1KB to 6MB.\n* Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from large to small.\n* For optimal results when querying [Face - Identify](../face/identify), [Face - Verify](../face/verifyfacetoface), and [Face - Find Similar](../face/findsimilar) ('returnFaceId' is true), please use faces that are: frontal, clear, and with a minimum size of 200x200 pixels (100 pixels between eyes).\n* The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum face size.\n* Different 'detectionModel' values can be provided. To use and compare different detection models, please refer to [How to specify a detection model](https://docs.microsoft.com/azure/cognitive-services/face/face-api-how-to-topics/specify-detection-model).\n\n* Different 'recognitionModel' values are provided. If follow-up operations like Verify, Identify, Find Similar are needed, please specify the recognition model with 'recognitionModel' parameter. The default value for 'recognitionModel' is 'recognition_01', if latest model needed, please explicitly specify the model you need in this parameter. Once specified, the detected faceIds will be associated with the specified recognition model. More details, please refer to [Specify a recognition model](https://docs.microsoft.com/azure/cognitive-services/face/face-api-how-to-topics/specify-recognition-model).",
+ "description": "Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, and attributes.<br />\n* No image will be stored. Only the extracted face feature will be stored on server. The faceId is an identifier of the face feature and will be used in [Face - Identify](../face/identify), [Face - Verify](../face/verifyfacetoface), and [Face - Find Similar](../face/findsimilar). The stored face feature(s) will expire and be deleted at the time specified by faceIdTimeToLive after the original detection call.\n* Optional parameters include faceId, landmarks, and attributes. Attributes include age, gender, headPose, smile, facialHair, glasses, emotion, hair, makeup, occlusion, accessories, blur, exposure, noise, and mask. Some of the results returned for specific attributes may not be highly accurate.\n* JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size is from 1KB to 6MB.\n* Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from large to small.\n* For optimal results when querying [Face - Identify](../face/identify), [Face - Verify](../face/verifyfacetoface), and [Face - Find Similar](../face/findsimilar) ('returnFaceId' is true), please use faces that are: frontal, clear, and with a minimum size of 200x200 pixels (100 pixels between eyes).\n* The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum face size.\n* Different 'detectionModel' values can be provided. To use and compare different detection models, please refer to [How to specify a detection model](https://docs.microsoft.com/azure/cognitive-services/face/face-api-how-to-topics/specify-detection-model).\n\n* Different 'recognitionModel' values are provided. If follow-up operations like Verify, Identify, Find Similar are needed, please specify the recognition model with 'recognitionModel' parameter. The default value for 'recognitionModel' is 'recognition_01', if latest model needed, please explicitly specify the model you need in this parameter. Once specified, the detected faceIds will be associated with the specified recognition model. More details, please refer to [Specify a recognition model](https://docs.microsoft.com/azure/cognitive-services/face/face-api-how-to-topics/specify-recognition-model).",
"operationId": "Face_DetectWithUrl",
"parameters": [
{
@@ -2559,7 +2559,7 @@
},
"/detect?overload=stream": {
"post": {
- "description": "Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, and attributes.<br />\n* No image will be stored. Only the extracted face feature will be stored on server. The faceId is an identifier of the face feature and will be used in [Face - Identify](../face/identify), [Face - Verify](../face/verifyfacetoface), and [Face - Find Similar](../face/findsimilar). The stored face feature(s) will expire and be deleted at the time specified by faceIdTimeToLive after the original detection call.\n* Optional parameters include faceId, landmarks, and attributes. Attributes include age, gender, headPose, smile, facialHair, glasses, emotion, hair, makeup, occlusion, accessories, blur, exposure and noise. Some of the results returned for specific attributes may not be highly accurate.\n* JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size is from 1KB to 6MB.\n* Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from large to small.\n* For optimal results when querying [Face - Identify](../face/identify), [Face - Verify](../face/verifyfacetoface), and [Face - Find Similar](../face/findsimilar) ('returnFaceId' is true), please use faces that are: frontal, clear, and with a minimum size of 200x200 pixels (100 pixels between eyes).\n* The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum face size.\n* Different 'detectionModel' values can be provided. To use and compare different detection models, please refer to [How to specify a detection model](https://docs.microsoft.com/azure/cognitive-services/face/face-api-how-to-topics/specify-detection-model)\n* Different 'recognitionModel' values are provided. If follow-up operations like Verify, Identify, Find Similar are needed, please specify the recognition model with 'recognitionModel' parameter. The default value for 'recognitionModel' is 'recognition_01', if latest model needed, please explicitly specify the model you need in this parameter. Once specified, the detected faceIds will be associated with the specified recognition model. More details, please refer to [Specify a recognition model](https://docs.microsoft.com/azure/cognitive-services/face/face-api-how-to-topics/specify-recognition-model).",
+ "description": "Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, and attributes.<br />\n* No image will be stored. Only the extracted face feature will be stored on server. The faceId is an identifier of the face feature and will be used in [Face - Identify](../face/identify), [Face - Verify](../face/verifyfacetoface), and [Face - Find Similar](../face/findsimilar). The stored face feature(s) will expire and be deleted at the time specified by faceIdTimeToLive after the original detection call.\n* Optional parameters include faceId, landmarks, and attributes. Attributes include age, gender, headPose, smile, facialHair, glasses, emotion, hair, makeup, occlusion, accessories, blur, exposure, noise, and mask. Some of the results returned for specific attributes may not be highly accurate.\n* JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size is from 1KB to 6MB.\n* Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from large to small.\n* For optimal results when querying [Face - Identify](../face/identify), [Face - Verify](../face/verifyfacetoface), and [Face - Find Similar](../face/findsimilar) ('returnFaceId' is true), please use faces that are: frontal, clear, and with a minimum size of 200x200 pixels (100 pixels between eyes).\n* The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum face size.\n* Different 'detectionModel' values can be provided. To use and compare different detection models, please refer to [How to specify a detection model](https://docs.microsoft.com/azure/cognitive-services/face/face-api-how-to-topics/specify-detection-model)\n* Different 'recognitionModel' values are provided. If follow-up operations like Verify, Identify, Find Similar are needed, please specify the recognition model with 'recognitionModel' parameter. The default value for 'recognitionModel' is 'recognition_01', if latest model needed, please explicitly specify the model you need in this parameter. Once specified, the detected faceIds will be associated with the specified recognition model. More details, please refer to [Specify a recognition model](https://docs.microsoft.com/azure/cognitive-services/face/face-api-how-to-topics/specify-recognition-model).",
"operationId": "Face_DetectWithStream",
"parameters": [
{
@@ -3058,7 +3058,7 @@
"$ref": "#/definitions/Hair"
},
"makeup": {
- "description": "Properties describing present makeups on a given face.",
+ "description": "Properties describing the presence of makeup on a given face.",
"$ref": "#/definitions/Makeup"
},
"occlusion": {
@@ -3080,6 +3080,10 @@
"noise": {
"description": "Properties describing noise level of the image.",
"$ref": "#/definitions/Noise"
},
"mask": {
"description": "Properties describing the presence of a mask on a given face.",
"$ref": "#/definitions/Mask"
}
}
},
@@ -3215,7 +3219,7 @@
},
"Makeup": {
"type": "object",
- "description": "Properties describing present makeups on a given face.",
+ "description": "Properties describing the presence of makeup on a given face.",
"properties": {
"eyeMakeup": {
"type": "boolean",
@@ -3357,6 +3361,32 @@
}
}
},
"Mask": {
"type": "object",
"description": "Properties describing the presence of a mask on a given face.",
"properties": {
"type": {
"type": "string",
"description": "Mask type if any of the face",
"x-nullable": false,
"x-ms-enum": {
"name": "MaskType",
"modelAsString": false
},
"enum": [
"noMask",
"faceMask",
"otherMaskOrOcclusion",
"uncertain"
]
},
"noseAndMouthCovered": {
"type": "boolean",
"description": "A boolean value indicating whether nose and mouth are covered.",
"x-nullable": false
}
}
},
"FindSimilarRequest": {
"type": "object",
"required": [
@@ -3934,7 +3964,8 @@
"enum": [
"recognition_01",
"recognition_02",
- "recognition_03"
+ "recognition_03",
+ "recognition_04"
]
},
"ApplyScope": {
@@ -4139,7 +4170,7 @@
"returnFaceAttributes": {
"name": "returnFaceAttributes",
"in": "query",
- "description": "Analyze and return the one or more specified face attributes in the comma-separated string like \"returnFaceAttributes=age,gender\". Supported face attributes include age, gender, headPose, smile, facialHair, glasses and emotion. Note that each face attribute analysis has additional computational and time cost.",
+ "description": "Analyze and return the one or more specified face attributes in the comma-separated string like \"returnFaceAttributes=age,gender\". The available attributes depends on the 'detectionModel' specified. 'detection_01' supports age, gender, headPose, smile, facialHair, glasses, emotion, hair, makeup, occlusion, accessories, blur, exposure, and noise. While 'detection_02' does not support any attributes and 'detection_03' only supports mask. Note that each face attribute analysis has additional computational and time cost.",
"type": "array",
"x-ms-parameter-location": "method",
"required": false,
@@ -4165,7 +4196,8 @@
"accessories",
"blur",
"exposure",
- "noise"
+ "noise",
+ "mask"
]
}
},
@@ -4318,7 +4350,8 @@
"enum": [
"recognition_01",
"recognition_02",
- "recognition_03"
+ "recognition_03",
+ "recognition_04"
]
},
"returnRecognitionModel": {
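The Face swagger changes above add a 'mask' face attribute (supported only by 'detection_03' per the updated parameter description) and a new 'recognition_04' model. A minimal sketch of assembling the matching /detect query parameters; the helper name and its defaults are illustrative, not part of the spec, and a real call would also need an endpoint and subscription key:

```python
def build_detect_query(detection_model: str = "detection_03",
                       recognition_model: str = "recognition_04",
                       attributes=("mask",)) -> dict:
    """Assemble query parameters for Face - Detect.

    Per the updated swagger, 'detection_03' is the only detection model
    that supports the 'mask' attribute, and 'recognition_04' is the newly
    added recognition model value.
    """
    return {
        "detectionModel": detection_model,
        "recognitionModel": recognition_model,
        "returnFaceId": "true",
        # The spec expects a comma-separated string, e.g. "age,gender".
        "returnFaceAttributes": ",".join(attributes),
    }

params = build_detect_query()
print(params["returnFaceAttributes"])  # -> mask
# A real request would POST the image to
# https://<resource>.cognitiveservices.azure.com/face/v1.0/detect
# with these params and an Ocp-Apim-Subscription-Key header.
```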
@@ -1884,9 +1884,9 @@
"type": "string",
"enum": [
"positive",
- "positive possible",
- "neutral possible",
- "negative possible",
+ "positivepossible",
+ "neutralpossible",
+ "negativepossible",
"negative"
],
"x-ms-enum": {
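The enum change above replaces space-separated certainty values with concatenated ones ("positive possible" becomes "positivepossible"). A minimal client-side validation sketch against the corrected value set; the function name is hypothetical:

```python
# The five values of the corrected certainty enum from the diff above.
CERTAINTY_VALUES = {
    "positive",
    "positivepossible",
    "neutralpossible",
    "negativepossible",
    "negative",
}

def is_valid_certainty(value: str) -> bool:
    """Return True if value is one of the corrected enum values."""
    return value in CERTAINTY_VALUES

print(is_valid_certainty("positivepossible"))
print(is_valid_certainty("positive possible"))  # old space-separated form no longer validates
```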
@@ -4691,6 +4691,11 @@
"description": "Operation type: Read, write, delete, etc.",
"type": "string",
"readOnly": true
},
"description": {
"description": "Description of the operation.",
"type": "string",
"readOnly": true
}
}
}
@@ -771,7 +771,6 @@
},
"ClusterResource": {
"description": "Representation of a managed Cassandra cluster.",
- "x-ms-azure-resource": true,
"type": "object",
"allOf": [
{
@@ -991,7 +990,7 @@
"type": "object",
"allOf": [
{
- "$ref": "../../../../../common-types/resource-management/v2/types.json#/definitions/ProxyResource"
+ "$ref": "cosmos-db.json#/definitions/ARMProxyResource"
}
],
"properties": {
@@ -1056,10 +1055,9 @@
"DataCenterResource": {
"description": "A managed Cassandra data center.",
"type": "object",
- "x-ms-azure-resource": true,
"allOf": [
{
- "$ref": "../../../../../common-types/resource-management/v2/types.json#/definitions/ProxyResource"
+ "$ref": "cosmos-db.json#/definitions/ARMProxyResource"
}
],
"properties": {
