
Minor Documentation fixes #301

Merged — 2 commits merged into NVIDIA:main from benfred:docs_fixes on Sep 14, 2020

Conversation

@benfred (Collaborator) commented Sep 14, 2020

  • HowItWorks.md had code samples that no longer worked with the latest NVTabular.
  • Rossman "Finalize Columns" mentions nvtabular.torch_dataloader, which no longer exists.
  • Rossman Tensorflow "Defining a Model": the [TabularModel] link returns a 404.
  • Criteo "Initializing the Memory Pool": changed "a good best practices" to "a best practice".
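The second bullet reflects a module that was relocated between releases. When documentation or example code has to cope with such moves, one defensive pattern is to probe for the first importable module name. A minimal, self-contained sketch of that pattern — the nvtabular module paths below are taken from this PR's description and coverage report, not verified against any particular release:

```python
import importlib.util


def first_importable(candidates):
    """Return the first module name in `candidates` that resolves to an
    importable module, or None if none of them do."""
    for name in candidates:
        try:
            if importlib.util.find_spec(name) is not None:
                return name
        except ModuleNotFoundError:
            # The parent package itself is missing; try the next candidate.
            continue
    return None


# Prefer the current location, falling back to the legacy one
# (module names assumed from the PR description):
loader_module = first_importable(
    ["nvtabular.loader.torch", "nvtabular.torch_dataloader"]
)
```

Docs that pin only the old path break silently when the module moves; probing like this at least fails with a clear `None` instead of an `ImportError` deep inside an example.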
benfred added 2 commits Sep 14, 2020
@nvidia-merlin-bot (Collaborator) commented Sep 14, 2020

Click to view CI Results
GitHub pull request #301 of commit 106030addcd32d13674f9dfed296abc84cd8f9ea, no merge conflicts.
Running as SYSTEM
Setting status of 106030addcd32d13674f9dfed296abc84cd8f9ea to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/878/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/301/*:refs/remotes/origin/pr/301/* # timeout=10
 > git rev-parse 106030addcd32d13674f9dfed296abc84cd8f9ea^{commit} # timeout=10
Checking out Revision 106030addcd32d13674f9dfed296abc84cd8f9ea (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 106030addcd32d13674f9dfed296abc84cd8f9ea # timeout=10
Commit message: "Minor Documentation fixes"
 > git rev-list --no-walk 22e8a34ad71b56abbdca10353614ca905685f8c2 # timeout=10
First time build. Skipping changelog.
[nvtabular_tests] $ /bin/bash /tmp/jenkins7448838117181099238.sh
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular
All done! ✨ 🍰 ✨
71 files would be left unchanged.
/var/jenkins_home/.local/lib/python3.7/site-packages/isort/main.py:125: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
============================= test session starts ==============================
platform linux -- Python 3.7.8, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
benchmark: 3.2.3 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: setup.cfg
plugins: benchmark-3.2.3, hypothesis-5.28.0, asyncio-0.12.0, timeout-1.4.2, cov-2.10.1, forked-1.3.0, xdist-2.1.0
collected 512 items

tests/unit/test_column_similarity.py ...... [ 1%]
tests/unit/test_dask_nvt.py ............................................ [ 9%]
.......... [ 11%]
tests/unit/test_io.py .................................................. [ 21%]
............................ [ 26%]
tests/unit/test_notebooks.py .... [ 27%]
tests/unit/test_ops.py ................................................. [ 37%]
........................................................................ [ 51%]
........................................ [ 59%]
tests/unit/test_s3.py .. [ 59%]
tests/unit/test_tf_dataloader.py ............ [ 61%]
tests/unit/test_tf_layers.py ........................................... [ 70%]
................................ [ 76%]
tests/unit/test_torch_dataloader.py ..................... [ 80%]
tests/unit/test_workflow.py ............................................ [ 89%]
....................................................... [100%]

=============================== warnings summary ===============================
/opt/conda/envs/rapids/lib/python3.7/site-packages/pandas/util/__init__.py:12
/opt/conda/envs/rapids/lib/python3.7/site-packages/pandas/util/__init__.py:12: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing

tests/unit/test_io.py::test_mulifile_parquet[True-0-0-csv]
tests/unit/test_io.py::test_mulifile_parquet[True-0-2-csv]
tests/unit/test_io.py::test_mulifile_parquet[True-1-0-csv]
tests/unit/test_io.py::test_mulifile_parquet[True-1-2-csv]
tests/unit/test_io.py::test_mulifile_parquet[True-2-0-csv]
tests/unit/test_io.py::test_mulifile_parquet[True-2-2-csv]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/shuffle.py:42: DeprecationWarning: shuffle=True is deprecated. Using PER_WORKER.
warnings.warn("shuffle=True is deprecated. Using PER_WORKER.", DeprecationWarning)

tests/unit/test_notebooks.py::test_multigpu_dask_example
/opt/conda/envs/rapids/lib/python3.7/site-packages/distributed/node.py:155: UserWarning: Port 8787 is already in use.
Perhaps you already have a cluster running?
Hosting the HTTP server on port 35071 instead
http_address["port"], self.http_server.port

tests/unit/test_tf_layers.py: 130 warnings
/var/jenkins_home/.local/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_util.py:523: DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
tensor_proto.tensor_content = nparray.tostring()

tests/unit/test_tf_layers.py::test_dense_embedding_layer[stack]
/var/jenkins_home/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py:544: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working
if isinstance(inputs, collections.Sequence):

tests/unit/test_torch_dataloader.py::test_empty_cols[parquet]
tests/unit/test_torch_dataloader.py::test_empty_cols[parquet]
tests/unit/test_torch_dataloader.py::test_empty_cols[parquet]
tests/unit/test_torch_dataloader.py::test_empty_cols[parquet]
/opt/conda/envs/rapids/lib/python3.7/site-packages/cudf/core/dataframe.py:660: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

tests/unit/test_torch_dataloader.py::test_gpu_dl[None-parquet-1-1e-06]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/parquet.py:75: UserWarning: Row group size 28392 is bigger than requested part_size 17069
f"Row group size {rg_byte_size_0} is bigger than requested part_size "

tests/unit/test_torch_dataloader.py::test_gpu_dl[None-parquet-10-1e-06]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/parquet.py:75: UserWarning: Row group size 30520 is bigger than requested part_size 17069
f"Row group size {rg_byte_size_0} is bigger than requested part_size "

tests/unit/test_torch_dataloader.py::test_gpu_dl[None-parquet-100-1e-06]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/parquet.py:75: UserWarning: Row group size 30912 is bigger than requested part_size 17069
f"Row group size {rg_byte_size_0} is bigger than requested part_size "

tests/unit/test_torch_dataloader.py::test_gpu_dl[devices1-parquet-1-1e-06]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/parquet.py:75: UserWarning: Row group size 29204 is bigger than requested part_size 17069
f"Row group size {rg_byte_size_0} is bigger than requested part_size "

tests/unit/test_torch_dataloader.py::test_gpu_dl[devices1-parquet-10-1e-06]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/parquet.py:75: UserWarning: Row group size 29344 is bigger than requested part_size 17069
f"Row group size {rg_byte_size_0} is bigger than requested part_size "

tests/unit/test_torch_dataloader.py::test_gpu_dl[devices1-parquet-100-1e-06]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/parquet.py:75: UserWarning: Row group size 30240 is bigger than requested part_size 17069
f"Row group size {rg_byte_size_0} is bigger than requested part_size "

tests/unit/test_torch_dataloader.py::test_kill_dl[parquet-1e-06]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/parquet.py:75: UserWarning: Row group size 60480 is bigger than requested part_size 17069
f"Row group size {rg_byte_size_0} is bigger than requested part_size "

tests/unit/test_workflow.py::test_chaining_3
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/dataset.py:193: UserWarning: part_mem_fraction is ignored for DataFrame input.
warnings.warn("part_mem_fraction is ignored for DataFrame input.")

-- Docs: https://docs.pytest.org/en/stable/warnings.html

----------- coverage: platform linux, python 3.7.8-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

nvtabular/__init__.py 8 0 0 0 100%
nvtabular/framework_utils/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/__init__.py 3 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 134 12 81 5 87% 27->28, 28, 51->60, 60, 68->49, 190-198, 201, 294->302, 315->318, 321-322, 325
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 2 20 1 96% 47->48, 48, 112
nvtabular/framework_utils/torch/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/__init__.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 11 0 4 0 100%
nvtabular/framework_utils/torch/models.py 24 0 8 1 97% 80->82
nvtabular/framework_utils/torch/utils.py 18 0 4 0 100%
nvtabular/io/__init__.py 4 0 0 0 100%
nvtabular/io/csv.py 14 1 4 1 89% 35->36, 36
nvtabular/io/dask.py 80 3 32 6 92% 154->157, 164->165, 165, 169->171, 171->167, 175->176, 176, 177->178, 178
nvtabular/io/dataframe_engine.py 12 2 4 1 81% 31->32, 32, 37
nvtabular/io/dataset.py 99 9 46 8 88% 190->191, 191, 203->204, 204, 212->213, 213, 221->233, 226->231, 231-233, 308->309, 309, 323->324, 324-325, 343->344, 344
nvtabular/io/dataset_engine.py 12 0 0 0 100%
nvtabular/io/hugectr.py 42 1 18 1 97% 64->87, 91
nvtabular/io/parquet.py 153 1 50 3 98% 139->140, 140, 235->237, 243->248
nvtabular/io/shuffle.py 25 2 10 2 89% 38->39, 39, 43->46, 46
nvtabular/io/writer.py 119 9 42 2 92% 29, 46, 70->71, 71, 109, 112, 173->174, 174, 195-197
nvtabular/io/writer_factory.py 16 2 6 2 82% 31->32, 32, 49->52, 52
nvtabular/loader/__init__.py 0 0 0 0 100%
nvtabular/loader/backend.py 188 8 60 5 95% 69->70, 70, 133->134, 134, 144-145, 156, 231->233, 246->247, 247, 269->270, 270-271
nvtabular/loader/tensorflow.py 109 16 46 10 82% 39->40, 40-41, 51->52, 52, 59->60, 60-63, 72->73, 73, 78->83, 83, 244-253, 268->269, 269, 288->289, 289, 296->297, 297, 298->301, 301, 306->307, 307
nvtabular/loader/tf_utils.py 51 7 20 5 83% 29->32, 32->34, 39->41, 42->43, 43, 50-51, 56->64, 59-64
nvtabular/loader/torch.py 33 0 4 0 100%
nvtabular/ops/__init__.py 20 0 0 0 100%
nvtabular/ops/categorify.py 365 54 192 38 82% 155->156, 156, 164->169, 169, 179->180, 180, 224->225, 225, 268->269, 269, 272->278, 348->349, 349-351, 353->354, 354, 355->356, 356, 374->377, 377, 388->389, 389, 395->398, 421->422, 422-423, 425->426, 426-427, 429->430, 430-446, 448->452, 452, 456->457, 457, 458->459, 459, 466->467, 467, 468->469, 469, 475->476, 476, 485->494, 494-495, 499->500, 500, 513->514, 514, 516->519, 521->538, 538-541, 564->565, 565, 568->569, 569, 570->571, 571, 578->579, 579, 580->583, 583, 690->691, 691, 692->693, 693, 714->729, 754->759, 757->758, 758, 768->765, 773->765
nvtabular/ops/clip.py 25 3 10 4 80% 52->53, 53, 61->62, 62, 66->68, 68->69, 69
nvtabular/ops/column_similarity.py 89 21 28 4 70% 171-172, 181-183, 191-207, 222->232, 224->227, 227->228, 228, 237->238, 238
nvtabular/ops/difference_lag.py 21 1 4 1 92% 73->74, 74
nvtabular/ops/dropna.py 14 0 0 0 100%
nvtabular/ops/fill.py 36 2 10 2 91% 66->67, 67, 107->108, 108
nvtabular/ops/filter.py 17 1 2 1 89% 44->45, 45
nvtabular/ops/groupby_statistics.py 80 3 30 3 95% 146->147, 147, 151->176, 183->184, 184, 208
nvtabular/ops/hash_bucket.py 30 4 16 2 83% 96->97, 97-99, 100->103, 103
nvtabular/ops/join_external.py 66 4 26 5 90% 105->106, 106, 107->108, 108, 122->125, 125, 138->142, 178->179, 179
nvtabular/ops/join_groupby.py 56 0 18 0 100%
nvtabular/ops/lambdaop.py 24 2 8 2 88% 82->83, 83, 84->85, 85
nvtabular/ops/logop.py 17 1 4 1 90% 57->58, 58
nvtabular/ops/median.py 24 1 2 0 96% 52
nvtabular/ops/minmax.py 30 1 2 0 97% 56
nvtabular/ops/moments.py 33 1 2 0 97% 60
nvtabular/ops/normalize.py 49 4 14 4 84% 65->66, 66, 73->72, 122->123, 123, 132->134, 134-135
nvtabular/ops/operator.py 19 1 8 2 89% 43->42, 45->46, 46
nvtabular/ops/stat_operator.py 10 0 0 0 100%
nvtabular/ops/target_encoding.py 92 1 22 3 96% 144->146, 173->174, 174, 225->228
nvtabular/ops/transform_operator.py 41 6 10 2 80% 42-46, 68->69, 69-71, 88->89, 89
nvtabular/utils.py 25 5 10 5 71% 26->27, 27, 28->31, 31, 37->38, 38, 40->41, 41, 45->47, 47
nvtabular/worker.py 65 1 30 2 97% 80->92, 118->121, 121
nvtabular/workflow.py 420 38 232 24 89% 99->103, 103, 109->110, 110-114, 144->exit, 160->exit, 176->exit, 192->exit, 245->247, 295->296, 296, 375->378, 378, 403->404, 404, 410->413, 413, 476->477, 477, 495->497, 497-506, 517->516, 566->571, 571, 574->575, 575, 610->611, 611, 660->651, 726->737, 737, 759-789, 816->817, 817, 830->833, 863->864, 864-866, 870->871, 871, 904->905, 905
setup.py 2 2 0 0 0% 18-20

TOTAL 2874 232 1139 158 89%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 89.09%
================ 512 passed, 151 warnings in 479.48s (0:07:59) =================
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
source activate rapids
cd /var/jenkins_home/
python test_res_push.py "https://api.github.com/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins7439487825141316237.sh

@nvidia-merlin-bot (Collaborator) commented Sep 14, 2020

Click to view CI Results
GitHub pull request #301 of commit 7b6b3baeb89dd671733d27b4953ab2dcdf20f876, no merge conflicts.
Running as SYSTEM
Setting status of 7b6b3baeb89dd671733d27b4953ab2dcdf20f876 to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/879/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/301/*:refs/remotes/origin/pr/301/* # timeout=10
 > git rev-parse 7b6b3baeb89dd671733d27b4953ab2dcdf20f876^{commit} # timeout=10
Checking out Revision 7b6b3baeb89dd671733d27b4953ab2dcdf20f876 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7b6b3baeb89dd671733d27b4953ab2dcdf20f876 # timeout=10
Commit message: "tweak torch loader"
 > git rev-list --no-walk 106030addcd32d13674f9dfed296abc84cd8f9ea # timeout=10
[nvtabular_tests] $ /bin/bash /tmp/jenkins7696831271088942769.sh
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 0.2.0
    Uninstalling nvtabular-0.2.0:
      Successfully uninstalled nvtabular-0.2.0
  Running setup.py develop for nvtabular
Successfully installed nvtabular
All done! ✨ 🍰 ✨
71 files would be left unchanged.
/var/jenkins_home/.local/lib/python3.7/site-packages/isort/main.py:125: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
============================= test session starts ==============================
platform linux -- Python 3.7.8, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
benchmark: 3.2.3 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: setup.cfg
plugins: benchmark-3.2.3, hypothesis-5.28.0, asyncio-0.12.0, timeout-1.4.2, cov-2.10.1, forked-1.3.0, xdist-2.1.0
collected 512 items

tests/unit/test_column_similarity.py ...... [ 1%]
tests/unit/test_dask_nvt.py ............................................ [ 9%]
.......... [ 11%]
tests/unit/test_io.py .................................................. [ 21%]
............................ [ 26%]
tests/unit/test_notebooks.py .... [ 27%]
tests/unit/test_ops.py ................................................. [ 37%]
........................................................................ [ 51%]
........................................ [ 59%]
tests/unit/test_s3.py .. [ 59%]
tests/unit/test_tf_dataloader.py ............ [ 61%]
tests/unit/test_tf_layers.py ........................................... [ 70%]
................................ [ 76%]
tests/unit/test_torch_dataloader.py ..................... [ 80%]
tests/unit/test_workflow.py ............................................ [ 89%]
....................................................... [100%]

=============================== warnings summary ===============================
/opt/conda/envs/rapids/lib/python3.7/site-packages/pandas/util/__init__.py:12
/opt/conda/envs/rapids/lib/python3.7/site-packages/pandas/util/__init__.py:12: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing

tests/unit/test_io.py::test_mulifile_parquet[True-0-0-csv]
tests/unit/test_io.py::test_mulifile_parquet[True-0-2-csv]
tests/unit/test_io.py::test_mulifile_parquet[True-1-0-csv]
tests/unit/test_io.py::test_mulifile_parquet[True-1-2-csv]
tests/unit/test_io.py::test_mulifile_parquet[True-2-0-csv]
tests/unit/test_io.py::test_mulifile_parquet[True-2-2-csv]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/shuffle.py:42: DeprecationWarning: shuffle=True is deprecated. Using PER_WORKER.
warnings.warn("shuffle=True is deprecated. Using PER_WORKER.", DeprecationWarning)

tests/unit/test_notebooks.py::test_multigpu_dask_example
/opt/conda/envs/rapids/lib/python3.7/site-packages/distributed/node.py:155: UserWarning: Port 8787 is already in use.
Perhaps you already have a cluster running?
Hosting the HTTP server on port 34037 instead
http_address["port"], self.http_server.port

tests/unit/test_tf_layers.py: 130 warnings
/var/jenkins_home/.local/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_util.py:523: DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
tensor_proto.tensor_content = nparray.tostring()

tests/unit/test_tf_layers.py::test_dense_embedding_layer[stack]
/var/jenkins_home/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py:544: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working
if isinstance(inputs, collections.Sequence):

tests/unit/test_torch_dataloader.py::test_empty_cols[parquet]
tests/unit/test_torch_dataloader.py::test_empty_cols[parquet]
tests/unit/test_torch_dataloader.py::test_empty_cols[parquet]
tests/unit/test_torch_dataloader.py::test_empty_cols[parquet]
/opt/conda/envs/rapids/lib/python3.7/site-packages/cudf/core/dataframe.py:660: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

tests/unit/test_torch_dataloader.py::test_gpu_dl[None-parquet-1-1e-06]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/parquet.py:75: UserWarning: Row group size 28392 is bigger than requested part_size 17069
f"Row group size {rg_byte_size_0} is bigger than requested part_size "

tests/unit/test_torch_dataloader.py::test_gpu_dl[None-parquet-10-1e-06]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/parquet.py:75: UserWarning: Row group size 30520 is bigger than requested part_size 17069
f"Row group size {rg_byte_size_0} is bigger than requested part_size "

tests/unit/test_torch_dataloader.py::test_gpu_dl[None-parquet-100-1e-06]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/parquet.py:75: UserWarning: Row group size 30912 is bigger than requested part_size 17069
f"Row group size {rg_byte_size_0} is bigger than requested part_size "

tests/unit/test_torch_dataloader.py::test_gpu_dl[devices1-parquet-1-1e-06]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/parquet.py:75: UserWarning: Row group size 31276 is bigger than requested part_size 17069
f"Row group size {rg_byte_size_0} is bigger than requested part_size "

tests/unit/test_torch_dataloader.py::test_gpu_dl[devices1-parquet-10-1e-06]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/parquet.py:75: UserWarning: Row group size 29344 is bigger than requested part_size 17069
f"Row group size {rg_byte_size_0} is bigger than requested part_size "

tests/unit/test_torch_dataloader.py::test_gpu_dl[devices1-parquet-100-1e-06]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/parquet.py:75: UserWarning: Row group size 30240 is bigger than requested part_size 17069
f"Row group size {rg_byte_size_0} is bigger than requested part_size "

tests/unit/test_torch_dataloader.py::test_kill_dl[parquet-1e-06]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/parquet.py:75: UserWarning: Row group size 60480 is bigger than requested part_size 17069
f"Row group size {rg_byte_size_0} is bigger than requested part_size "

tests/unit/test_workflow.py::test_chaining_3
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/io/dataset.py:193: UserWarning: part_mem_fraction is ignored for DataFrame input.
warnings.warn("part_mem_fraction is ignored for DataFrame input.")

-- Docs: https://docs.pytest.org/en/stable/warnings.html

----------- coverage: platform linux, python 3.7.8-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

nvtabular/__init__.py 8 0 0 0 100%
nvtabular/framework_utils/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/__init__.py 3 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 134 12 81 5 87% 27->28, 28, 51->60, 60, 68->49, 190-198, 201, 294->302, 315->318, 321-322, 325
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 2 20 1 96% 47->48, 48, 112
nvtabular/framework_utils/torch/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/__init__.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 11 0 4 0 100%
nvtabular/framework_utils/torch/models.py 24 0 8 1 97% 80->82
nvtabular/framework_utils/torch/utils.py 18 0 4 0 100%
nvtabular/io/__init__.py 4 0 0 0 100%
nvtabular/io/csv.py 14 1 4 1 89% 35->36, 36
nvtabular/io/dask.py 80 3 32 6 92% 154->157, 164->165, 165, 169->171, 171->167, 175->176, 176, 177->178, 178
nvtabular/io/dataframe_engine.py 12 2 4 1 81% 31->32, 32, 37
nvtabular/io/dataset.py 99 9 46 8 88% 190->191, 191, 203->204, 204, 212->213, 213, 221->233, 226->231, 231-233, 308->309, 309, 323->324, 324-325, 343->344, 344
nvtabular/io/dataset_engine.py 12 0 0 0 100%
nvtabular/io/hugectr.py 42 1 18 1 97% 64->87, 91
nvtabular/io/parquet.py 153 1 50 3 98% 139->140, 140, 235->237, 243->248
nvtabular/io/shuffle.py 25 2 10 2 89% 38->39, 39, 43->46, 46
nvtabular/io/writer.py 119 9 42 2 92% 29, 46, 70->71, 71, 109, 112, 173->174, 174, 195-197
nvtabular/io/writer_factory.py 16 2 6 2 82% 31->32, 32, 49->52, 52
nvtabular/loader/__init__.py 0 0 0 0 100%
nvtabular/loader/backend.py 188 8 60 5 95% 69->70, 70, 133->134, 134, 144-145, 156, 231->233, 246->247, 247, 269->270, 270-271
nvtabular/loader/tensorflow.py 109 16 46 10 82% 39->40, 40-41, 51->52, 52, 59->60, 60-63, 72->73, 73, 78->83, 83, 244-253, 268->269, 269, 288->289, 289, 296->297, 297, 298->301, 301, 306->307, 307
nvtabular/loader/tf_utils.py 51 7 20 5 83% 29->32, 32->34, 39->41, 42->43, 43, 50-51, 56->64, 59-64
nvtabular/loader/torch.py 33 0 4 0 100%
nvtabular/ops/__init__.py 20 0 0 0 100%
nvtabular/ops/categorify.py 365 54 192 38 82% 155->156, 156, 164->169, 169, 179->180, 180, 224->225, 225, 268->269, 269, 272->278, 348->349, 349-351, 353->354, 354, 355->356, 356, 374->377, 377, 388->389, 389, 395->398, 421->422, 422-423, 425->426, 426-427, 429->430, 430-446, 448->452, 452, 456->457, 457, 458->459, 459, 466->467, 467, 468->469, 469, 475->476, 476, 485->494, 494-495, 499->500, 500, 513->514, 514, 516->519, 521->538, 538-541, 564->565, 565, 568->569, 569, 570->571, 571, 578->579, 579, 580->583, 583, 690->691, 691, 692->693, 693, 714->729, 754->759, 757->758, 758, 768->765, 773->765
nvtabular/ops/clip.py 25 3 10 4 80% 52->53, 53, 61->62, 62, 66->68, 68->69, 69
nvtabular/ops/column_similarity.py 89 21 28 4 70% 171-172, 181-183, 191-207, 222->232, 224->227, 227->228, 228, 237->238, 238
nvtabular/ops/difference_lag.py 21 1 4 1 92% 73->74, 74
nvtabular/ops/dropna.py 14 0 0 0 100%
nvtabular/ops/fill.py 36 2 10 2 91% 66->67, 67, 107->108, 108
nvtabular/ops/filter.py 17 1 2 1 89% 44->45, 45
nvtabular/ops/groupby_statistics.py 80 3 30 3 95% 146->147, 147, 151->176, 183->184, 184, 208
nvtabular/ops/hash_bucket.py 30 4 16 2 83% 96->97, 97-99, 100->103, 103
nvtabular/ops/join_external.py 66 4 26 5 90% 105->106, 106, 107->108, 108, 122->125, 125, 138->142, 178->179, 179
nvtabular/ops/join_groupby.py 56 0 18 0 100%
nvtabular/ops/lambdaop.py 24 2 8 2 88% 82->83, 83, 84->85, 85
nvtabular/ops/logop.py 17 1 4 1 90% 57->58, 58
nvtabular/ops/median.py 24 1 2 0 96% 52
nvtabular/ops/minmax.py 30 1 2 0 97% 56
nvtabular/ops/moments.py 33 1 2 0 97% 60
nvtabular/ops/normalize.py 49 4 14 4 84% 65->66, 66, 73->72, 122->123, 123, 132->134, 134-135
nvtabular/ops/operator.py 19 1 8 2 89% 43->42, 45->46, 46
nvtabular/ops/stat_operator.py 10 0 0 0 100%
nvtabular/ops/target_encoding.py 92 1 22 3 96% 144->146, 173->174, 174, 225->228
nvtabular/ops/transform_operator.py 41 6 10 2 80% 42-46, 68->69, 69-71, 88->89, 89
nvtabular/utils.py 25 5 10 5 71% 26->27, 27, 28->31, 31, 37->38, 38, 40->41, 41, 45->47, 47
nvtabular/worker.py 65 1 30 2 97% 80->92, 118->121, 121
nvtabular/workflow.py 420 38 232 24 89% 99->103, 103, 109->110, 110-114, 144->exit, 160->exit, 176->exit, 192->exit, 245->247, 295->296, 296, 375->378, 378, 403->404, 404, 410->413, 413, 476->477, 477, 495->497, 497-506, 517->516, 566->571, 571, 574->575, 575, 610->611, 611, 660->651, 726->737, 737, 759-789, 816->817, 817, 830->833, 863->864, 864-866, 870->871, 871, 904->905, 905
setup.py 2 2 0 0 0% 18-20

TOTAL 2874 232 1139 158 89%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 89.09%
================ 512 passed, 151 warnings in 474.56s (0:07:54) =================
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
source activate rapids
cd /var/jenkins_home/
python test_res_push.py "https://api.github.com/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins4552748000930826983.sh

@benfred benfred requested a review from jperez999 Sep 14, 2020
@benfred benfred merged commit 25851f1 into NVIDIA:main Sep 14, 2020
1 check passed
Jenkins Unit Test Run Success
Details
@benfred benfred deleted the benfred:docs_fixes branch Sep 14, 2020
3 participants