update shortest_path to TF2 #119
Conversation
googlebot commented May 29, 2020
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). Once you've signed (or fixed any issues), please reply here (e.g. "I signed it!") so that we can verify it.

What to do if you already signed the CLA:
- Individual signers
- Corporate signers
@googlebot I signed it!
googlebot commented May 29, 2020

CLAs look good, thanks!
Thanks for the PR and the initiative @maguileracanon! To make the review easier, do you think it would be possible to:
Thanks in advance :)
Thanks, do you think it would be possible to rewrite the commit history to have just those two CLs (similarly to how it was done here)? Currently, the diff of the "update to tf2" commit does not pick up on the specific changes that were made (ideally the changes would highlight the tf2 difference specifically); if possible it would look more similar to this diff. I know GitHub support for diffs in colab notebooks is not great, and maybe the changes are large enough that the diff cannot figure out the shared parts, but otherwise it is hard to see what actually changed.
Hi! Yes, sorry, the revert commit was because, as you mentioned, I noticed that the diff for the first 2 commits wasn't picking up the difference between them, so I tried to revert it and see if I could find another solution. I still don't really understand why it is taking the diff as if I had deleted everything and placed it again. I'll post a comment when I solve it.
Ok, the problem was that I was deleting the output at the end, and that was effectively rewriting the entire notebook. If you look at the last commit e82bd5b, the diff should now be clear for review. I hope this is what you need, but let me know if there is anything else.
That looks much better! It would be great if you could get rid of the earlier commits so it shows a single first commit with the full copy (e.g. a large diff), and then a second commit with a diff similar to what you have now in the last commit. Otherwise, it is hard to track all of the changes across the earlier commits. One option could be to squash all of the commits (except the last one); this should result in a single commit that corresponds purely to the file move. Alternatively, it may be easier to start again, by:
(force-pushed from e82bd5b to 09e6b69)
Ok, the commit history looks a little bit tidier now :)
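The squash discussed above can also be done without an interactive rebase, via a soft reset. A sketch in a throwaway repo (branch names and commit messages here are illustrative, not the PR's actual history):

```shell
# Demonstrated in a temporary repo; on a real PR branch you would
# reset against the base branch (e.g. origin/master) instead of HEAD~2.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "you"

# Three work-in-progress commits, standing in for the PR history.
echo "v1" > demo.ipynb; git add demo.ipynb; git commit -qm "wip 1"
echo "v2" > demo.ipynb; git commit -qam "wip 2"
echo "v3" > demo.ipynb; git commit -qam "wip 3"

# Squash everything after the first commit into a single commit:
git reset --soft HEAD~2        # keeps the working tree, drops two commits
git commit -qm "update shortest_path to TF2"

git rev-list --count HEAD      # two commits remain
# On a real PR branch, follow with: git push --force-with-lease
```

`--soft` leaves the index and working tree untouched, so the squashed commit contains exactly the final state of the files.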
Thanks for your PR, great work! Please find below some comments. I think we should actually be able to achieve the same with just a few line changes, to keep it closer to the tf1 version and to the other tf2 version. Something else that I noticed is that when I run this demo against the old tf2 version in colaboratory, the results are not identical. It may be worth investigating, although maybe it is just different default initialization settings or something like that.
@@ -45,7 +45,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 0,
+   "execution_count": null,
    "metadata": {
     "cellView": "form",
     "colab": {},
alvarosg commented Jun 23, 2020 (Collaborator)

(can't comment below, but these two should be changed):
!pip install graph_nets "dm-sonnet<2" "tensorflow_probability<0.9"
to
!pip install "graph_nets>=1.1" "dm-sonnet>=2.0.0b0"
and
pip install graph_nets matplotlib scipy "tensorflow>=1.15,<2" "dm-sonnet<2" "tensorflow_probability<0.9"
to
pip install "graph_nets>=1.1" matplotlib scipy "tensorflow>=2.1.0-rc1" "dm-sonnet>=2.0.0b0" tensorflow_probability
@@ -2,7 +2,7 @@
  "cells": [
alvarosg commented Jun 23, 2020 (Collaborator)

Please make sure this runs in Google Colaboratory, and add the relevant TF2 version to the README (analogous to how it was done for the sort demo).
| " input_graphs = networkxs_to_tf_graphs_tuple(inputs)\n", | ||
| " target_graphs = networkxs_to_tf_graphs_tuple(targets)\n", | ||
| " \n", | ||
| " return input_graphs,target_graphs, raw_graphs\n", |
alvarosg commented Jun 23, 2020 (Collaborator)

Leave a space after commas; it would be great if you could run a linter on this (but make sure it keeps the two-space indentation).
| "\n", | ||
| "\n", | ||
| "def create_feed_dict(rand, batch_size, num_nodes_min_max, theta, input_ph,\n", | ||
| " target_ph):\n", | ||
| "def create_tf_GraphTouple(rand, batch_size, num_nodes_min_max, theta):\n", |
| "def create_placeholders(rand, batch_size, num_nodes_min_max, theta):\n", | ||
| " \"\"\"Creates placeholders for the model training and evaluation.\n", | ||
| "def networkxs_to_tf_graphs_tuple(graph_nxs,\n", | ||
| " node_shape_hint=None,\n", |
alvarosg commented Jun 23, 2020 (Collaborator)

Make sure arguments are aligned, and leave exactly two empty lines between method definitions :)
| " with tf.GradientTape() as tape:\n", | ||
| " output_ops_tr = model(inputs_tr, num_processing_steps_tr)\n", | ||
| " # Loss.\n", | ||
| " loss_tr = create_loss_ops(targets_tr,output_ops_tr)\n", |
| "\n", | ||
| "# Get the input signature for that function by obtaining the specs\n", | ||
| "input_signature = [\n", | ||
| " utils_tf.specs_from_graphs_tuple(input_ph),\n", |
alvarosg commented Jun 23, 2020 (Collaborator)

It should be 4 spaces in this case, as this is not a nested context, but a line continuation.
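The input_signature built in the quoted cell (via utils_tf.specs_from_graphs_tuple in the notebook) exists so that tf.function traces the compiled update step once and reuses it for graphs of varying size. A plain-TF sketch of the same idea, with a hypothetical TensorSpec standing in for the GraphsTuple specs:

```python
import tensorflow as tf

# A fixed input_signature with a None dimension lets tf.function
# reuse one traced concrete function for any batch size.
input_signature = [
    tf.TensorSpec(shape=[None, 3], dtype=tf.float32),  # hypothetical spec
]


@tf.function(input_signature=input_signature)
def sum_features(x):
  return tf.reduce_sum(x, axis=-1)


a = sum_features(tf.ones([2, 3]))
b = sum_features(tf.ones([5, 3]))  # different shape, no retracing
```

Without the signature, every new input shape would trigger a retrace, which is costly when graph sizes vary between batches.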
@@ -802,36 +837,23 @@
 "\n",
 "# Data.\n",
-"# Input and target placeholders.\n",
-"input_ph, target_ph = create_placeholders(rand, batch_size_tr,\n",
+"input_ph, target_ph, _ = create_tf_GraphTouple(rand, batch_size_tr,\n",
| " with tf.GradientTape() as tape:\n", | ||
| " output_ops_tr = model(inputs_tr, num_processing_steps_tr)\n", | ||
| " # Loss.\n", | ||
| " loss_tr = create_loss_ops(targets_tr,output_ops_tr)\n", |
@@ -998,12 +1034,12 @@
 "node_size = 120\n",
 "min_c = 0.3\n",
 "num_graphs = len(raw_graphs)\n",
-"targets = utils_np.graphs_tuple_to_data_dicts(test_values[\"target\"])\n",
+"targets = graphs_touple_to_graph_list(targets_ge)\n",
alvarosg commented Jun 23, 2020 (Collaborator)

I have checked, and this one and the one below just work with:
utils_np.graphs_tuple_to_data_dicts
so you can remove the graphs_touple_to_graph_list method.
(You will need to also revert the changes in line 1017 and line 1043.)
maguileracanon commented May 29, 2020 (edited)

Hello, graph-nets authors!
Following the discussion in #115, I wanted to contribute, so I adapted the shortest_path example to run in TF2, implementing the Sonnet Adam optimizer. I added a couple of functions to make it runnable; ideally, they should probably live inside utils_np or utils_tf, but I kept them here for simplicity.
I hope it is useful!