
Conversation

@reyoung (Collaborator) commented Jan 19, 2018

Users can use `a+b`, `a*10`.
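
For context, a minimal usage sketch of what these expressions enable (illustrative only, not taken from this PR; the import path and layer call are assumptions about the Fluid API of that time):

```python
# Hypothetical usage sketch; the import path is an assumption and may have
# been `paddle.v2.fluid` at the time of this PR.
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[13], dtype='float32')

y = x + x    # Variable + Variable
z = x * 10   # Variable * Python scalar
w = 10 - x   # reverse form; the scalar is broadcast first (see discussion below)
```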
@reyoung reyoung requested a review from JiayiFeng January 19, 2018 06:41
@tonyyang-svail commented Jan 19, 2018

Will this PR handle `b = -a`?


def monkey_patch_variable():
    def new_name():
        return unique_name("tmp")
Collaborator

Any more meaningful name?

Collaborator Author

Done.

tmp_name = new_name()
var = block.create_var(name=tmp_name, shape=shape, dtype=dtype)
block.append_op(
    type="fill_constant",
Collaborator

Why do we use `fill_constant_op` in `create_tensor` but `fill_op` in `create_scalar`?

Collaborator Author

Cool


def astype(self, dtype):
    """
    Cast a variable to data type.
Collaborator

Cast a variable to a specified data type.

Collaborator Author

Cool, thanks.

# add fill_op to self.block
other_var = create_scalar(
    self.block, value=other_var, dtype=lhs_dtype)

Collaborator

Maybe we should add a type check here to make sure the rhs has been correctly cast to a Variable, since some users may pass an np.array as other_var.

Collaborator Author

create_scalar and create_tensor already check that the value can be cast to float.
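
For illustration, a hedged sketch of the kind of check meant here (not the PR's code): the float() cast is what rejects unsupported right-hand values.

```python
import numpy as np

# Not the PR's code; just showing why a float() cast acts as a type check.
float(10)               # OK
float(np.float64(1.5))  # OK: numpy scalars cast cleanly
try:
    float(np.array([1.0, 2.0]))  # multi-element arrays cannot be cast
except TypeError as err:
    print(err)  # "only size-1 arrays can be converted to Python scalars"
```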

Collaborator

OK, I see.

else:
    other_var = create_tensor_with_batchsize(
        self, other_var, lhs_dtype)
else:
Collaborator

Why is other_var cast to a Variable by different methods in the two branches of `if reverse:`?

@reyoung (Collaborator Author) Jan 22, 2018

A reverse operator in Python means the left-hand operand is a plain Python value and the right-hand operand is a Variable.

In the elementwise operators, only the right-hand operand can be broadcast. Suppose we write `a - 10`: 10 is subtracted from every element of `a`. However, for `10 - a`, the broadcast is not implemented in the C++ operators, so we have to explicitly broadcast 10 to the same shape as `a`.

Getting the shape of a variable has two situations (a sketch follows the list):

  1. The variable is a parameter. Its shape is decided at compile time and is not related to the batch size, so we use fill_constant.
  2. The variable is a layer output. Its shape is decided at runtime and depends on the batch size, so we use fill_constant_batch_size_like.
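
A minimal sketch of the second case (a paraphrased assumption, not copied from the diff; it assumes it lives inside monkey_patch_variable where `unique_name` is available, and the op attributes follow fill_constant_batch_size_like's documented interface):

```python
# Hedged sketch: broadcast a Python scalar to the runtime shape of a layer
# output by appending fill_constant_batch_size_like, which copies the batch
# dimension from a reference Variable at runtime.
def create_tensor_with_batchsize(ref_var, value, dtype):
    value = float(value)
    block = ref_var.block
    var = block.create_var(name=unique_name("tmp"), dtype=dtype)
    block.append_op(
        type="fill_constant_batch_size_like",
        inputs={"Input": [ref_var]},
        outputs={"Out": [var]},
        attrs={"shape": ref_var.shape, "dtype": var.dtype, "value": value})
    return var
```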

@JiayiFeng (Collaborator) Jan 22, 2018

Great idea! Impressive.

@reyoung (Collaborator Author) commented Jan 22, 2018

> Will this PR handle `b = -a`?

No, it won't. But following this PR, we can overload all the operators in Python and provide an API just like numpy's.
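
As a hedged follow-up sketch (not part of this PR), unary negation could be added with the same monkey-patching approach; using the existing `scale` op with scale=-1.0 is an assumption:

```python
# Hypothetical sketch: support b = -a by monkey-patching Variable.__neg__.
# Assumes the helpers from this PR (unique_name, block.append_op) are in scope.
def monkey_patch_neg(Variable):
    def __neg__(self):
        out = self.block.create_var(name=unique_name("tmp"), dtype=self.dtype)
        self.block.append_op(
            type="scale",
            inputs={"X": [self]},
            outputs={"Out": [out]},
            attrs={"scale": -1.0})
        return out

    Variable.__neg__ = __neg__
```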

@JiayiFeng (Collaborator) left a comment

LGTM

@reyoung reyoung merged commit f45b0b0 into PaddlePaddle:develop Jan 22, 2018
@emailweixu emailweixu mentioned this pull request Feb 9, 2018