I am a principal machine learning research engineer on the Microsoft Azure research team (@Microsoft-CISL), working on new scalable distributed algorithms for ML.
- Microsoft Gray Systems Lab
- Redmond, WA (UTC -07:00)
- motus.github.io
- @motus2
Pinned
- MLOS (Public; forked from microsoft/MLOS; Python)
  MLOS is a Data Science-powered infrastructure and methodology to democratize and automate Performance Engineering. MLOS enables continuous, instance-based, robust, and trackable systems optimization.
- zml (Public; forked from josephwecker/zml)
  Concise markup language + templating system inspired by haml, SLiP, sexp2xml, and tenjin.
- dotnet/TorchSharp (Public)
  A .NET library that provides access to the library that powers PyTorch.
187 contributions in the last year
Contribution activity
May 2023
Created 22 commits in 1 repository
Created a pull request in microsoft/MLOS that received 2 comments
Opened 8 other pull requests in 2 repositories
microsoft/MLOS (7 merged):
- bugfix: make mlos_core optimizer shim work when the optimization parameter name is not "score"
- fix flake8, mypy, and pylint issues
- bugfix: enforce the data types for bulk_register inputs
- Minor fixes to make our Azure example work with mlos_core SKOPT optimizer
- various cosmetic fixes to please pylint and flake8; shorter version of mock service implementations
- bugfix: test fails on Windows depending on the case of the drive letter (C: or c:)
- fail gracefully when running the start-up script with no arguments
kkanellis/MLOS (1 merged)
Reviewed 24 pull requests in 1 repository
microsoft/MLOS (24 pull requests):
- mlos_bench config json schema validation - cli
- mlos_core: add SMAC optimizer
- mlos_core add FLAML optimizer
- mlos_bench config json schema validation - storage
- mlos_bench config json schema validation - tunable params
- devcontainer style improvements
- mlos_bench config json schema validation: optimizers and tunable_values
- bugfix: make mlos_core optimizer shim work when the optimization parameter name is not "score"
- fix flake8, mypy, and pylint issues
- bugfix: enforce the data types for bulk_register inputs
- Fixups to some docs
- Path join tweak
- Reorg Bayesian optimizers to a single class per file
- various cosmetic fixes to please pylint and flake8; shorter version of mock service implementations
- Additional tunable group merging tests
- Refactor config examples directory structure
- Fixup Makefile parallel job support
- Tunable: rename categorical_value to categorical
- Fixes for TunableGroups merging logic
- fail gracefully when running the start-up script with no arguments
- Tweaks to Tunables
- Rework a non-boot param example to a boot param
- Publish mlos_bench script
- Support top-level config file for mlos_bench.run