Insights: NVIDIA/TensorRT
Overview
2 Releases published by 2 people
- 22.08, published Aug 17, 2022
- 8.4.3 (TensorRT OSS v8.4.3), published Aug 19, 2022
2 Pull requests merged by 1 person
- Update TensorRT version to 8.4.3.1 (#2262), merged Aug 19, 2022
- TensorRT 22.08 release (#2250), merged Aug 17, 2022
1 Pull request opened by 1 person
- Update Sample Custom Plugin README (#2254), opened Aug 17, 2022
10 Issues closed by 9 people
- Support for the ONNX IsInf op (#2196), closed Aug 19, 2022
- Mismatched type error when generating an engine for a quantized stereo-depth model (#2131), closed Aug 18, 2022
- TensorRT chooses fp32 input instead of fp16 input, even though fp16 is faster (#2246), closed Aug 18, 2022
- Performance comparison of the resnet18 model using PTQ INT8 quantization (FP16 vs. INT8) (#2242), closed Aug 18, 2022
- Error Code 4: Miscellaneous (IShuffleLayer bbox_pred/reshape: reshape changes volume) (#2248), closed Aug 17, 2022
- Can int be used as input? (#2213), closed Aug 17, 2022
- Internal Error (Assertion min_ <= max_ failed.) (#1634), closed Aug 17, 2022
- Deserializing an engine file with the Python API failed (#2195), closed Aug 16, 2022
- Stuck, raising Error Code 2: Internal Error (Assertion memSize >= 0 failed.) (#2211), closed Aug 15, 2022
- set_binding_shape returns False (#2215), closed Aug 15, 2022
14 Issues opened by 14 people
- bool type cast with max op has unusual time cost (#2261), opened Aug 19, 2022
- EnginePlan for trt-engine-explorer fails to parse trtexec output (#2260), opened Aug 19, 2022
- TensorRT 8 TopK output accuracy is quite low (#2259), opened Aug 19, 2022
- Conversion error for the IsInf op (#2258), opened Aug 19, 2022
- CUDA 11.4 Dockerfile: aarch64 packages not found (#2257), opened Aug 18, 2022
- Expose additional ONNX protobuf model properties via nvonnxparser::IParser (#2256), opened Aug 18, 2022
- [BUG] ResNet50 model has wrong precision after quantization with TensorRT INT8 PTQ (#2255), opened Aug 18, 2022
- Python ILogger subclassing example does not work (#2253), opened Aug 17, 2022
- Sample MNIST cannot run with DLA (#2252), opened Aug 17, 2022
- Error Code 4: Internal Error (Conv_1840: number of kernel weights does not match tensor dimensions) (#2251), opened Aug 17, 2022
- sample_mnist gives libnvcaffeparser.so.8 => not found (#2249), opened Aug 16, 2022
- Python interface to implement export_profile, export_layer_info, and export_times? (#2247), opened Aug 16, 2022
12 Unresolved conversations
Sometimes conversations continue on older items that are not yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- Windows TensorRT inference time fluctuates greatly with some GPU drivers (#2229), commented on Aug 17, 2022 (7 new comments)
- Installation error (on multiple sites, but specifically Colab this time) (#2243), commented on Aug 20, 2022 (6 new comments)
- QAT ONNX model conversion to a TRT engine failed (#2240), commented on Aug 16, 2022 (5 new comments)
- Problem converting an ONNX model to TRT (#2225), commented on Aug 15, 2022 (2 new comments)
- Different accuracy with identical models [ONNX, Accuracy] (#2182), commented on Aug 18, 2022 (2 new comments)
- Error Code 4: Internal Error (Network must have at least one output) (#1728), commented on Aug 17, 2022 (1 new comment)
- Myelin problem converting a TensorFlow 2.5.0 Object Detection API model to TensorRT on Jetson Nano (#1316), commented on Aug 18, 2022 (1 new comment)
- Could not find any implementation for node {ForeignNode[Transpose_2713 + (Unnamed Layer* 4032) [Shuffle]...MatMul_2714]} (#2124), commented on Aug 19, 2022 (1 new comment)
- TensorRT 8.4.2 (#2218), commented on Aug 20, 2022 (1 new comment)
- T5 model with TensorRT: runtime on GPU (#1845), commented on Aug 20, 2022 (1 new comment)
- How should I speed up a T5 original exported saved_model using TRT? (#2235), commented on Aug 16, 2022 (0 new comments)
- Memory leak when building a TensorRT engine multiple times in a loop while converting an ONNX model (#2220), commented on Aug 20, 2022 (0 new comments)