doc/inference api #11332


Merged: 7 commits, Jun 21, 2018

Conversation

Superjomn (Contributor):

No description provided.

@Superjomn Superjomn requested review from panyx0718 and luotao1 June 10, 2018 10:54
```cpp
};

struct PaddleBuf {
  void* data;  // pointer to the data memory.
```
Contributor:

need to document the data layout. Is it row-major?

The main interface is `PaddlePredictor`; it has the following methods:

- `bool Run(const std::vector<PaddleTensor>& inputs, std::vector<PaddleTensor>* output_data)`
- take inputs and output `output_data`
Contributor:

Document pointer ownership?

Contributor:

Document thread-safety?

Contributor (Author):

The return is a `unique_ptr`, so ownership transfers transparently.

@luotao1 (Contributor) left a comment:

Does high_level_api.md need to be moved under doc/fluid?

# Inference High-level APIs
This document describes the high-level inference APIs one can use to easily deploy a Paddle model for an application.

The APIs are described in `paddle_inference_api.h`, just one header file, and two libraries `libpaddle_fluid.so` and `libpaddle_fluid_api.so` are needed.
Contributor:

For the API, would providing only libpaddle_fluid_api.so be enough?

Contributor (Author):

Can it all be compiled in? I found earlier that it doesn't work; paddle_fluid.so is still needed.


## PaddleTensor
Forget the hard-to-understand `LoDTensor`,
we provide the `PaddleTensor` data structure is to give a general tensor interface.
Contributor:

Line 8 has two predicates: "provide" and "is".


The data is stored in a continuous memory `PaddleBuf`, and tensor's data type is specified by a `PaddleDType`.
The `name` field is used to specify the name of input variable,
that is important when there are multiple inputs and need to distiuish which variable to set the content.
Contributor:

"which variable to set the content" is missing a predicate.


## engine
The inference APIs has different underlying implementation, currently there are two valid engines:
Contributor:

have different underlying implementations


- the native engine, which is consists of the native operators and framework,
Contributor:

"is consists of" — please fix the grammar.

Contributor (Author):

This sentence is fine, isn't it?

- the Anakin engine, which is a Anakin library embeded.
Contributor:

A link to Anakin could be added here.


The native engine takes a native Paddle model as input and supports any model trained by Paddle.

The Anakin engine can only take an Anakin model as input (users need to manually transform the format first), and currently not all Paddle models are supported.
Contributor:

Should line 41 be moved after line 38, and line 43 after line 39?

Contributor (Author):

done

- take inputs and output `output_data`
- `Clone` to clone a predictor from an existing one, with model parameter shared.

There is a factory method to help create a predictor
Contributor:

Missing a period.

## Reference

- [paddle_inference_api.h](./paddle_inference_api.h)
- [demos](./demo)
Contributor (Author):

I'll add it in another PR.

## PaddleTensor
Forget the hard-to-understand `LoDTensor`,
Contributor:

Can remove this line

@Superjomn Superjomn merged commit bcea248 into PaddlePaddle:develop Jun 21, 2018
@Superjomn Superjomn deleted the doc/inference-api branch June 21, 2018 09:15
reyoung pushed a commit to reyoung/Paddle that referenced this pull request Jun 30, 2018
Superjomn pushed a commit that referenced this pull request Jun 30, 2018
* doc/inference api (#11332)

* inference API init cn (#11731)