
Support algebraic operations for BaseNode #358

Draft · jziub wants to merge 5 commits into master from jziub:alg-ops
Conversation

@jziub
Contributor

jziub commented Oct 31, 2019

This is to fix #292 .

@googlebot

googlebot commented Oct 31, 2019

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or Co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
In order to pass this check, please resolve this problem and then comment @googlebot I fixed it.. If the bot doesn't comment, it means it doesn't think anything has changed.

ℹ️ Googlers: Go here for more info.

@googlebot googlebot added the cla: no label Oct 31, 2019
@jziub jziub force-pushed the jziub:alg-ops branch from bbff216 to 0ee2a3b Oct 31, 2019
@googlebot

googlebot commented Oct 31, 2019

CLAs look good, thanks!

ℹ️ Googlers: Go here for more info.

@googlebot googlebot added cla: yes and removed cla: no labels Oct 31, 2019
@jziub
Contributor Author

jziub commented Oct 31, 2019

This is an implementation of addition in the numpy backend. I will add the other operations if this is the right direction.

For the returned node, should it be a copy of the left-hand node, or just a new node with the dangling edges?

@Thenerdstation
Collaborator

Thenerdstation commented Oct 31, 2019

Hm, that's a good question. Perhaps it would only make sense to support modifying the tensor in place, so we would only support a += b type operations.

@mganahl what are your thoughts?

@mganahl
Collaborator

mganahl commented Nov 1, 2019

I think typically one would expect such an operation to return a new object, so I would say __add__ should return a new Node with new edges. We can still have __iadd__ as well, in which case everything except Node.tensor stays the same as it was.
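The semantics mganahl describes can be sketched with a hypothetical, numpy-backed stand-in for `BaseNode` (the class and names here are illustrative, not the library's actual API): `__add__` returns a brand-new object, while `__iadd__` keeps the same object and replaces only `.tensor`.

```python
import numpy as np

class Node:
  """Hypothetical minimal stand-in for BaseNode, holding a numpy tensor."""

  def __init__(self, tensor):
    self.tensor = np.asarray(tensor)

  def __add__(self, other):
    # Returns a brand-new Node; in the real library this would also
    # create new dangling edges for the result.
    return Node(self.tensor + other.tensor)

  def __iadd__(self, other):
    # In-place variant: everything except .tensor stays the same.
    self.tensor = self.tensor + other.tensor
    return self

a = Node(np.ones((2, 2)))
b = Node(np.ones((2, 2)))
c = a + b           # new object
node_id = id(a)
a += b              # same object, updated tensor
assert c is not a and id(a) == node_id
```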

@jziub
Contributor Author

jziub commented Nov 1, 2019

Thanks for clarifying. Will return a brand new node then.

@Thenerdstation
Collaborator

Thenerdstation commented Nov 3, 2019

Thanks for the PR!

Other projects will take priority for me right now, so I will review this in ~2 weeks.


@googlebot

googlebot commented Nov 4, 2019

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or Co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
In order to pass this check, please resolve this problem and then comment @googlebot I fixed it.. If the bot doesn't comment, it means it doesn't think anything has changed.

ℹ️ Googlers: Go here for more info.

@googlebot googlebot added cla: no and removed cla: yes labels Nov 4, 2019
@jziub jziub force-pushed the jziub:alg-ops branch from c3f2335 to 227fd29 Nov 4, 2019
@googlebot

googlebot commented Nov 4, 2019

CLAs look good, thanks!

ℹ️ Googlers: Go here for more info.

@googlebot googlebot added cla: yes and removed cla: no labels Nov 4, 2019
@jziub
Contributor Author

jziub commented Nov 4, 2019

No problem. I have implemented addition and subtraction across backends.

    a = np.ones(tensor1.shape)
    b = np.ones(tensor2.shape)
    return ShellTensor((a + b).shape)

  def sub(self, tensor1: Tensor, tensor2: Tensor) -> Tensor:
    a = np.ones(tensor1.shape)
    b = np.ones(tensor2.shape)

Comment on lines 260 to 266


@Thenerdstation

Thenerdstation Nov 12, 2019 Collaborator

Not sure I agree with how this is implemented. This will blow up if the shapes are huge (which would be the main reason to use the shell backend in the first place).


@mganahl

mganahl Nov 12, 2019 Collaborator

I agree; just check that the shapes of the tensors match and then create a ShellTensor directly.
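The suggested fix can be sketched as follows (a hypothetical, self-contained version of the shell backend's add; `ShellTensor` and `shell_add` here are illustrative stand-ins, not the actual PR code):

```python
class ShellTensor:
  """Hypothetical shape-only tensor, as in the shell backend under review."""

  def __init__(self, shape):
    self.shape = tuple(shape)

def shell_add(tensor1, tensor2):
  # Compare shapes directly instead of materializing np.ones arrays,
  # which would blow up memory for huge shapes.
  if tensor1.shape != tensor2.shape:
    raise ValueError(
        f"cannot add tensors of shapes {tensor1.shape} and {tensor2.shape}")
  return ShellTensor(tensor1.shape)

# Works for shapes far too large to materialize as real arrays.
out = shell_add(ShellTensor((10**6, 10**6)), ShellTensor((10**6, 10**6)))
assert out.shape == (10**6, 10**6)
```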


@jziub

jziub Nov 13, 2019 Author Contributor

Changed as suggested. Added another unit test to check mismatched shapes. Please take another look.

@@ -83,7 +83,10 @@ def diag(self, tensor: Tensor) -> Tensor:
    return self.tf.linalg.diag(tensor)

  def convert_to_tensor(self, tensor: Tensor) -> Tensor:
    result = self.tf.convert_to_tensor(tensor)
    if isinstance(tensor, (float, int)):


@mganahl

mganahl Nov 20, 2019 Collaborator

what is this case covering?


@jziub

jziub Nov 22, 2019 Author Contributor

Given a scalar, we need to specify the data type to create a tensor in tensorflow.


@mganahl

mganahl Nov 22, 2019 Collaborator

I'm just worried that explicit use of tf.float64 could lead to problems if other dtypes, e.g. tf.float32, are in use.


@jziub

jziub Nov 22, 2019 Author Contributor

Hmm, the input is constrained to Python float and int types; could that make the explicit tf.float64 safe?


@mganahl

mganahl Nov 22, 2019 Collaborator

Can you tell me why you added this?


@jziub

jziub Nov 22, 2019 Author Contributor

self.tf.convert_to_tensor(tensor) throws an error if the input is float/int.


@mganahl

mganahl Nov 22, 2019 Collaborator

OK, but where in the code do you need to pass a float or int to backend.convert_to_tensor?


@jziub

jziub Nov 22, 2019 Author Contributor

The following functions call backend.convert_to_tensor; a test case looks like np.ones((1, 2, 3)) + 2.

def sub()
def add()
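The dtype concern raised above can be sidestepped by taking the dtype from context rather than hard-coding it. A minimal sketch in plain numpy, assuming a hypothetical `convert_to_tensor` helper and `reference_dtype` parameter (neither is the library's actual API):

```python
import numpy as np

def convert_to_tensor(value, reference_dtype=np.float64):
  """Hypothetical sketch: Python scalars need an explicit dtype, but
  hard-coding float64 (the worry voiced above) would break float32
  networks, so the dtype is taken from a reference tensor instead."""
  if isinstance(value, (float, int)):
    return np.asarray(value, dtype=reference_dtype)
  return np.asarray(value)

t = np.ones((1, 2, 3), dtype=np.float32)
s = convert_to_tensor(2, reference_dtype=t.dtype)
result = t + s
# The scalar adopts the tensor's dtype, so no implicit promotion occurs.
assert result.dtype == np.float32
```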

  raise AttributeError("Please provide a valid tensor for this Node.")

  if isinstance(other, BaseNode):
    other_tensor = other.get_tensor()


@mganahl

mganahl Nov 20, 2019 Collaborator

I thought we had deprecated get_tensor()?


@jziub

jziub Nov 22, 2019 Author Contributor

Replaced with tensor.

@googlebot

googlebot commented Nov 22, 2019

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or Co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
In order to pass this check, please resolve this problem and then comment @googlebot I fixed it.. If the bot doesn't comment, it means it doesn't think anything has changed.

ℹ️ Googlers: Go here for more info.

@googlebot googlebot added cla: no and removed cla: yes labels Nov 22, 2019
@jziub jziub force-pushed the jziub:alg-ops branch from 759a895 to 992573a Nov 22, 2019
@googlebot

googlebot commented Nov 22, 2019

CLAs look good, thanks!

ℹ️ Googlers: Go here for more info.

@googlebot googlebot added cla: yes and removed cla: no labels Nov 22, 2019
@@ -257,3 +257,33 @@ def test_multiply(a, b, expected):
  tensor2 = backend.convert_to_tensor(b)

  np.testing.assert_allclose(backend.multiply(tensor1, tensor2), expected)


@pytest.mark.parametrize("a, b, expected",


@mganahl

mganahl Nov 22, 2019 Collaborator

please test this for all supported dtypes


@jziub

jziub Nov 22, 2019 Author Contributor

Added test cases (a + b, where b could be a vector, float, or int). Please take another look.

@@ -537,3 +537,33 @@ def test_remove_node(backend):
  broken_edges_by_name, broken_edges_by_axis = tn.remove_node(b)
  assert broken_edges_by_name == {"0": a[0]}
  assert broken_edges_by_axis == {0: a[0]}


@pytest.mark.parametrize("a, b, expected",


@mganahl

mganahl Nov 22, 2019 Collaborator

please parametrize tests over different dtypes


@mganahl

mganahl Nov 22, 2019 Collaborator

also check that dtypes are preserved (no implicit conversions)


@jziub

jziub Nov 22, 2019 Author Contributor

Added test cases (a + b, where b could be a vector, float, or int). Please take another look.
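The dtype coverage requested above can be sketched with a parametrized pytest, in the same style the PR's tests already use (this is an illustrative sketch over plain numpy addition, not the actual backend under test):

```python
import numpy as np
import pytest

@pytest.mark.parametrize("dtype", [np.float32, np.float64, np.complex64])
def test_addition_preserves_dtype(dtype):
  # Parametrizing over dtypes also catches implicit conversions.
  a = np.ones((2, 3), dtype=dtype)
  b = np.ones((2, 3), dtype=dtype)
  result = a + b
  np.testing.assert_allclose(result, 2 * np.ones((2, 3)))
  # No implicit promotion: the result keeps the input dtype.
  assert result.dtype == dtype
```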
