This API serves predictions from TensorFlow Lite and TensorFlow models. It exposes two routes.
### POST /predict_place

Predicts a place from the input text using the TensorFlow Lite model.

**Request body**
| Name | Type | Description |
|---|---|---|
| data | string | Input text (category) to predict a place for |
```json
{
  "data": "category"
}
```

**Response body**

| Name | Type | Description |
|---|---|---|
| status | string | Response status ("success" on success) |
| error | bool | Whether an error occurred |
| result | array | Predicted places (string array) |
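Given the fields above, a client can check `status`/`error` before reading `result`. A minimal sketch (the helper name is illustrative, not part of the API):

```python
def extract_predictions(body: dict) -> list:
    """Return the prediction list from a response body,
    raising if the API signalled an error."""
    if body.get("error") or body.get("status") != "success":
        raise RuntimeError(f"prediction failed: {body.get('status')!r}")
    return list(body["result"])

# Using the documented example response:
example = {
    "status": "success",
    "error": False,
    "result": ["Place 1", "Place 2", "Place 3"],
}
print(extract_predictions(example))  # → ['Place 1', 'Place 2', 'Place 3']
```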
```json
{
  "status": "success",
  "error": false,
  "result": [
    "Place 1",
    "Place 2",
    "Place 3"
  ]
}
```

### POST /predict_places

Predicts multiple places from a list of input texts using the TensorFlow model.

**Request body**
| Name | Type | Description |
|---|---|---|
| data | array | List of input texts (categories) to predict places for |
```json
{
  "data": [
    "category1",
    "category2",
    "category3"
  ]
}
```

**Response body**

| Name | Type | Description |
|---|---|---|
| status | string | Response status ("success" on success) |
| error | bool | Whether an error occurred |
| result | array | Predicted places (string array) |
```json
{
  "status": "success",
  "error": false,
  "result": [
    "Place 1",
    "Place 2",
    "Place 3"
  ]
}
```

**Errors**

If an error occurs during prediction, the API responds with HTTP status 500 and the message "Internal Server Error".