San Jose, CA
Pinned
- NVIDIA/TensorRT Public
  TensorRT is a C++ library for high-performance inference on NVIDIA GPUs and deep learning accelerators.
- triton-inference-server/server Public
  The Triton Inference Server provides an optimized cloud and edge inferencing solution.
- Tiny-Imagenet-200 Public
  🔬 Some personal research code on analyzing CNNs. Started with a thorough exploration of Stanford's Tiny-Imagenet-200 dataset.
- NumberPhile Public
  A repository for simulating some of the interesting mathematics problems discussed on the popular YouTube channel Numberphile. One implementation done so far is a visualization of the golden ratio…
  Python 6
282 contributions in the last year
Activity overview
Contribution activity
July 2022
Created 9 commits in 3 repositories
Created a pull request in triton-inference-server/core that received 5 comments
Verify startup_models (--load-model args) exist
Detect non-existent startup models and raise an error when they fail to load. Server QA test update: triton-inference-server/server#4681
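A minimal sketch of the behavior this change targets, assuming a hypothetical local tritonserver binary and model repository path (/models); --model-control-mode=explicit and --load-model are existing tritonserver flags, and the expected non-zero exit reflects the error described above:

import subprocess

# Launch tritonserver with a startup model that does not exist in the
# (hypothetical) /models repository.
proc = subprocess.run(
    [
        "tritonserver",
        "--model-repository=/models",
        "--model-control-mode=explicit",
        "--load-model=nonexistent_model",  # assumed missing on purpose
    ],
    capture_output=True,
    text=True,
)

# With this change, startup should fail with an error instead of the
# server silently continuing without the requested model.
assert proc.returncode != 0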
Opened 6 other pull requests in 2 repositories
- triton-inference-server/server: 1 open, 4 merged
- triton-inference-server/core: 1 merged
Reviewed 25 pull requests in 6 repositories
triton-inference-server/server: 17 pull requests
- Fix L0_backend_python
- Move tensorflow default version from 1 to 2
- Fix formatting and L0 copyright
- Fix L0_decoupled
- Fix L0_decoupled
- Fix expected byte size calculation
- Create extension_logging.md
- Update model lifecycle test for edge case
- Add an example to demonstrate shape tensor handling
- Documentation Update: Fix outdated Helm chart instructions for prometheus. Add model loading example
- Replace perf_client usage with unit tests
- Fix the max_batch_size specification in benchmark tests
- Fix for optional secrets.token in deploy/fleetcommand helm chart
- Prefix Logs with Request ID
- [DO NOT MERGE] error codes 21.02
- Add documentation regarding GRPC streaming use case
- Fix collision test in L0_lifecycle: use separate server logs