consistent_tensor_infer_cache: fix memory leak #5938

Conversation

@PragmaTwice PragmaTwice commented Aug 18, 2021

Detected by the clang static analyzer:

/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:227:36: warning: Potential leak of memory pointed to by 'result' [clang-analyzer-cplusplus.NewDeleteLeaks]
    const auto& nd_sbp = SymbolOf(*JUST(op->NdSbp4BnInOp(ibn)));
                                   ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:277:40: note: expanded from macro 'JUST'
      auto* stack_frame = maybe.error()->add_stack_frame();       \
                                       ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:278:3: note: Taking true branch
  if (iter == cache_.end()) {
  ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:280:5: note: Taking false branch
    CHECK_OR_RETURN(static_cast<bool>(user_op_expr));
    ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:323:3: note: expanded from macro 'CHECK_OR_RETURN'
  if (!(expr))                                                                     \
  ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:281:44: note: Calling 'ConsistentTensorInferCache::Infer'
    const auto& output_tensor_metas = JUST(Infer(*user_op_expr, infer_args));
                                           ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:275:52: note: expanded from macro 'JUST'
    auto&& maybe = __MaybeErrorStackCheckWrapper__(__VA_ARGS__);  \
                                                   ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:261:46: note: expanded from macro '__MaybeErrorStackCheckWrapper__'
#define __MaybeErrorStackCheckWrapper__(...) __VA_ARGS__
                                             ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:186:3: note: Assuming the condition is true
  CHECK_GT_OR_RETURN(infer_args.input_consistent_tensor_metas().size(), 0);
  ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:334:19: note: expanded from macro 'CHECK_GT_OR_RETURN'
  CHECK_OR_RETURN((lhs) > (rhs)) << "(" << (lhs) << " vs " << (rhs) << ") "
                  ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:323:9: note: expanded from macro 'CHECK_OR_RETURN'
  if (!(expr))                                                                     \
        ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:186:3: note: Taking false branch
  CHECK_GT_OR_RETURN(infer_args.input_consistent_tensor_metas().size(), 0);
  ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:334:3: note: expanded from macro 'CHECK_GT_OR_RETURN'
  CHECK_OR_RETURN((lhs) > (rhs)) << "(" << (lhs) << " vs " << (rhs) << ") "
  ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:323:3: note: expanded from macro 'CHECK_OR_RETURN'
  if (!(expr))                                                                     \
  ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:189:3: note: Taking false branch
  JUST(CheckInputParallelDescIdentical(infer_args));
  ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:276:5: note: expanded from macro 'JUST'
    if (!maybe.IsOk()) {                                          \
    ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:190:3: note: Taking false branch
  JUST(CheckIsDeviceSupportedByOp(*parallel_desc, user_op_expr.op_type_name()));
  ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:276:5: note: expanded from macro 'JUST'
    if (!maybe.IsOk()) {                                          \
    ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:195:5: note: Assuming the condition is false
    JUST(user_op_expr.InferLogicalShapeAndDType(
    ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:276:9: note: expanded from macro 'JUST'
    if (!maybe.IsOk()) {                                          \
        ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:195:5: note: Taking false branch
    JUST(user_op_expr.InferLogicalShapeAndDType(
    ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:276:5: note: expanded from macro 'JUST'
    if (!maybe.IsOk()) {                                          \
    ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:200:20: note: Taking false branch
  const auto& op = JUST(MakeOp(user_op_expr, infer_args.attrs(), parallel_desc->device_tag()));
                   ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:276:5: note: expanded from macro 'JUST'
    if (!maybe.IsOk()) {                                          \
    ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:201:3: note: Assuming the condition is false
  JUST(op->FillOpParallelDesc(parallel_desc.shared_from_symbol()));
  ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:276:9: note: expanded from macro 'JUST'
    if (!maybe.IsOk()) {                                          \
        ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:201:3: note: Taking false branch
  JUST(op->FillOpParallelDesc(parallel_desc.shared_from_symbol()));
  ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:276:5: note: expanded from macro 'JUST'
    if (!maybe.IsOk()) {                                          \
    ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:205:5: note: Taking false branch
    JUST(infer_args.MakeNdSbpConstraints(user_op_expr, &nd_sbp_constraints));
    ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:276:5: note: expanded from macro 'JUST'
    if (!maybe.IsOk()) {                                          \
    ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:207:5: note: Taking false branch
    JUST(infer_args.MakeInputBlobDescs(user_op_expr, &blob_descs));
    ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:276:5: note: expanded from macro 'JUST'
    if (!maybe.IsOk()) {                                          \
    ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:209:5: note: Taking false branch
    JUST(infer_args.MakeNdSbpInferHints(user_op_expr, blob_descs, &pd_infer_hints));
    ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:276:5: note: expanded from macro 'JUST'
    if (!maybe.IsOk()) {                                          \
    ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:218:5: note: Assuming the condition is false
    JUST(op->InferNdSbpSignatureIf(nd_sbp_constraints, *parallel_desc, NdSbpInferHint4Ibn));
    ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:276:9: note: expanded from macro 'JUST'
    if (!maybe.IsOk()) {                                          \
        ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:218:5: note: Taking false branch
    JUST(op->InferNdSbpSignatureIf(nd_sbp_constraints, *parallel_desc, NdSbpInferHint4Ibn));
    ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:276:5: note: expanded from macro 'JUST'
    if (!maybe.IsOk()) {                                          \
    ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:221:7: note: Memory is allocated
      new ConsistentTensorInferResult(user_op_expr.input_size(), user_op_expr.output_size());
      ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:223:23: note: Assuming the condition is true
  for (int32_t i = 0; i < user_op_expr.input_size(); ++i) {
                      ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:223:3: note: Loop condition is true.  Entering loop body
  for (int32_t i = 0; i < user_op_expr.input_size(); ++i) {
  ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:227:36: note: Assuming the condition is true
    const auto& nd_sbp = SymbolOf(*JUST(op->NdSbp4BnInOp(ibn)));
                                   ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:276:9: note: expanded from macro 'JUST'
    if (!maybe.IsOk()) {                                          \
        ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:227:36: note: Taking true branch
    const auto& nd_sbp = SymbolOf(*JUST(op->NdSbp4BnInOp(ibn)));
                                   ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:276:5: note: expanded from macro 'JUST'
    if (!maybe.IsOk()) {                                          \
    ^
/home/twice/project/oneflow/oneflow/core/framework/consistent_tensor_infer_cache.cpp:227:36: note: Potential leak of memory pointed to by 'result'
    const auto& nd_sbp = SymbolOf(*JUST(op->NdSbp4BnInOp(ibn)));
                                   ^
/home/twice/project/oneflow/oneflow/core/common/maybe.h:277:40: note: expanded from macro 'JUST'
      auto* stack_frame = maybe.error()->add_stack_frame();       \
                                       ^

@@ -217,8 +217,8 @@ Maybe<void> CheckIsDeviceSupportedByOp(const ParallelDesc& parallel_desc,
     // The inferred results can be retrieved by op->NdSbp4BnInOp(obn).
     JUST(op->InferNdSbpSignatureIf(nd_sbp_constraints, *parallel_desc, NdSbpInferHint4Ibn));
   }
-  auto* result =
-      new ConsistentTensorInferResult(user_op_expr.input_size(), user_op_expr.output_size());
+  auto result = std::make_unique<ConsistentTensorInferResult>(user_op_expr.input_size(),
+                                                              user_op_expr.output_size());
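The leak follows a common pattern: every `JUST(...)` between the raw `new` and the final ownership transfer can return early on error, skipping any manual `delete`. The `std::make_unique` fix makes the allocation exit-safe on all paths. A minimal standalone sketch of both patterns — with hypothetical names (`InferLeaky`, `InferSafe`, `step_that_may_fail`), not oneflow's actual `Maybe<T>` machinery:

```cpp
#include <memory>

// Stand-in for a JUST(...)-style step that may force an early return.
static bool step_that_may_fail(bool fail) { return !fail; }

// Leaky pattern, as flagged by clang-analyzer-cplusplus.NewDeleteLeaks:
// if the early return is taken, the raw allocation is never freed.
int* InferLeaky(bool fail) {
  int* result = new int(42);
  if (!step_that_may_fail(fail)) { return nullptr; }  // leaks `result`
  return result;
}

// Fixed pattern: unique_ptr owns the allocation, so every exit path
// (early return included) releases it automatically.
std::unique_ptr<int> InferSafe(bool fail) {
  auto result = std::make_unique<int>(42);
  if (!step_that_may_fail(fail)) { return nullptr; }  // `result` freed here
  return result;
}
```

Because the destructor runs during stack unwinding of the early return, no exit path needs a manual `delete`, which is exactly what the raw-pointer version could not guarantee.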
Could this just use make_shared directly, and then return it directly at line 243?


@PragmaTwice PragmaTwice Aug 18, 2021


Not really necessary. If we used make_shared here, it would have to be std::make_shared<ConsistentTensorInferResult> (non-const member functions are called on it afterwards), but the return type requires std::shared_ptr<const ConsistentTensorInferResult>, so a type conversion would still be needed, and shared_ptr itself is more expensive than unique_ptr.
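The point above rests on a standard conversion: a `std::unique_ptr<T>` returned from a function converts implicitly into a `std::shared_ptr<const T>`, so you can build with `make_unique`, mutate through the non-const pointer, and still satisfy a shared-to-const return type with no explicit cast. A small sketch with a hypothetical `InferResult` stand-in:

```cpp
#include <memory>

struct InferResult {
  int size = 0;
  void set_size(int n) { size = n; }  // non-const mutation during construction
};

// Build with unique_ptr (cheap, exclusive ownership), mutate, then hand
// off: unique_ptr<InferResult> converts to shared_ptr<const InferResult>
// in the return statement.
std::shared_ptr<const InferResult> Build() {
  auto result = std::make_unique<InferResult>();
  result->set_size(3);
  return result;
}
```

This keeps the object mutable only during construction and reference-counted only once it is shared, which is the trade-off the reply describes.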

@oneflow-ci-bot oneflow-ci-bot self-requested a review August 18, 2021 07:56
@oneflow-ci-bot oneflow-ci-bot merged commit 192410f into Oneflow-Inc:master Aug 18, 2021