dhiltgen
315 contributions in the last year
[Contribution calendar heatmap, March through February]
Contribution activity
February 2024
Created 35 commits in 2 repositories
Created a pull request in ggerganov/llama.cpp that received 7 comments
Fix 2 cuda memory leaks in ggml-cuda.cu
In our downstream usage of the server, we noticed it wouldn't fully unload the GPU when idle. Using the cuda memory leak detection tool I was able …
+35 −6 lines changed • 7 comments
Opened 21 other pull requests in 1 repository
ollama/ollama: 15 merged, 5 closed, 1 open
- Document setting server vars for windows (Feb 19)
- Explicitly disable AVX2 on GPU builds (Feb 19)
- Fix cuda leaks (Feb 19)
- Harden AMD driver lookup logic (Feb 17)
- Fix duplicate menus on update and exit on signals (Feb 16)
- Move LLM library extraction to stable location (Feb 16)
- Explicitly disable AVX2 on GPU builds (Feb 15)
- Harden the OLLAMA_HOST lookup for quotes (Feb 15)
- Fix a couple duplicate instance bugs (Feb 15)
- Windows App preview (Feb 13)
- Adjust ROCm v5 build (Feb 12)
- Detect AMD GPU info via sysfs and block old cards (Feb 12)
- More robust shutdown (Feb 9)
- Ensure the libraries are present (Feb 8)
- More robust shutdown logic (Feb 7)
- Bump llama.cpp to b2081 (Feb 6)
- Move hub auth out to new package (Feb 5)
- Get paths right for first run, and deps (Feb 5)
- Fit and finish, clean up cruft on uninstall (Feb 4)
- Revamp the windows tray code (Feb 4)
- Harden generate patching model (Feb 2)
Reviewed 14 pull requests in 2 repositories
ollama/ollama: 12 pull requests
- Update llama.cpp submodule to 66c1968f7 (Feb 20)
- Add support for MIG mode detection and use (Feb 19)
- Add support for libcudart.so for CUDA devices (Adds Jetson support) (Feb 16)
- Windows Preview (Feb 15)
- Fix a couple duplicate instance bugs (Feb 15)
- Windows App preview (Feb 15)
- set shutting_down to false once shutdown is complete (Feb 14)
- Update llama.cpp submodule to 099afc6 (Feb 12)
- Always add token to cache_tokens (Feb 12)
- Accomodate split cuda lib dir (Feb 5)
- Revamp the windows tray code (Feb 4)
- update submodule to 1cfb5372cf5707c8ec6dde7c874f4a44a6c4c915 (Feb 1)
ggerganov/llama.cpp: 2 pull requests
- Fix 2 cuda memory leaks in ggml-cuda.cu (Feb 19)
- Wire up graceful server shutdown (Feb 16)
Created an issue in ollama/ollama that received 5 comments
Add support for older AMD GPU gfx803 (e.g. Radeon RX 580)
Officially ROCm no longer supports these cards, but it looks like other projects have found workarounds. Let's explore if that's possible. Best cas…
5 comments
Opened 2 other issues in 1 repository
ollama/ollama: 1 open, 1 closed
- Add ROCm support on windows (Feb 19)
- Windows GPU libraries compiled with AVX2 instead of AVX (Feb 15)