# ridge-regression
Here are 218 public repositories matching this topic...
A simple machine learning framework written in Swift 🤖
swift
machine-learning
neural-network
genetic-algorithm
linear-regression
machine-learning-algorithms
regression
artificial-intelligence
machine-learning-library
feedforward-neural-network
kmeans
ridge-regression
polynomial-regression
backpropagation
kmeans-clustering
mlkit
lasso-regression
-
Updated
Aug 28, 2018 - Swift
Predicting Amsterdam house / real estate prices using Ordinary Least Squares, XGBoost, KNN, Lasso, Ridge, Polynomial, Random Forest, and Neural Network (MLP) regression (via scikit-learn)
real-estate
python
machine-learning
neural-network
random-forest
lasso
xgboost
polynomial
ensemble-learning
ols
decision-trees
ridge-regression
polynomial-regression
knn
multi-layer-perceptron
amsterdam
predicting-housing-prices
lasso-regression
mlp-regressor
knn-regression
-
Updated
Apr 9, 2019 - Python
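Repositories like the one above typically fit several scikit-learn regressors and compare them by cross-validated score. A minimal sketch of that workflow on synthetic data (illustrative only; the repository's Amsterdam data and feature set are not reproduced here):

```python
# Compare several regressors by 5-fold cross-validated R^2 on synthetic data.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)

models = {
    "ols": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=0.1),
    "knn": KNeighborsRegressor(n_neighbors=5),
    "forest": RandomForestRegressor(n_estimators=50, random_state=0),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:7s} mean R^2 = {score:.3f}")
```

On real housing data the ensemble methods often win; on this purely linear synthetic target the penalized linear models do.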
Sample questions for a job interview in binary options trading
statistics
betting
bayesian
investment
arima
ridge-regression
mcmc
acd
garch
elastic-net
ets
lasso-regression
investment-strategies
kelly-criterion
investment-portfolio
markowitz-portfolio
-
Updated
Nov 3, 2018
Implemented ADMM for solving convex optimization problems such as the Lasso and Ridge regression
-
Updated
Nov 13, 2018 - Jupyter Notebook
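For reference, a minimal ADMM solver for the Lasso (an illustrative sketch under my own variable names, not the repository's implementation; Ridge itself has a closed form, but the same splitting machinery applies):

```python
# ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1, split as x = z.
import numpy as np

def soft_threshold(v, k):
    """Elementwise soft-thresholding: the prox operator of k*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    # The x-update solves the same ridge-like system every iteration,
    # so the matrix can be formed (or factorized) once.
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z  # z is the exactly-sparse iterate

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
true_x = np.zeros(20)
true_x[:3] = [2.0, -3.0, 1.5]
b = A @ true_x + 0.01 * rng.standard_normal(100)
x_hat = admm_lasso(A, b, lam=1.0)
print(np.nonzero(np.abs(x_hat) > 1e-3)[0])
```

The z-iterate is soft-thresholded each step, so it is exactly sparse; with a strong signal and small noise the recovered support matches the true one.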
Built a house price prediction model using linear regression and k-nearest neighbors, and used machine learning techniques such as ridge, lasso, and gradient descent for optimization in Python
python
machine-learning
linear-regression
coursera
gradient-descent
ridge-regression
polynomial-regression
university-of-washington
house-price-prediction
multiple-regression
machine-learning-regression
-
Updated
Jan 20, 2018 - Jupyter Notebook
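Fitting ridge regression by gradient descent, as the notebook above does, can be sketched in a few lines (an assumed from-scratch illustration, not the course's code), and checked against the closed-form solution:

```python
# Ridge regression by batch gradient descent on
# ||Xw - y||^2 / n + alpha * ||w||^2.
import numpy as np

def ridge_gd(X, y, alpha=1.0, lr=0.01, n_iter=1000):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ w - y) / n + 2.0 * alpha * w
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(200)
w_gd = ridge_gd(X, y, alpha=0.1)

# The stationary point satisfies (X^T X + n*alpha*I) w = X^T y.
w_exact = np.linalg.solve(X.T @ X + 200 * 0.1 * np.eye(5), X.T @ y)
print(np.abs(w_gd - w_exact).max())
```

With a step size below 2 over the largest Hessian eigenvalue, the iterates converge to the closed-form solution.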
Automated Essay Scoring on The Hewlett Foundation dataset on Kaggle
machine-learning
natural-language-processing
linear-regression
sklearn
nltk
ensemble-learning
text-processing
text-analytics
ridge-regression
cohens-kappa
text-regression
lasso-regression
support-vector-regression
gradient-boosting-regressor
automatic-essay-scoring
-
Updated
Apr 26, 2018 - Jupyter Notebook
Double Descent Curve with Optical Random Features
-
Updated
May 4, 2020 - Jupyter Notebook
Accurate estimation and robust modelling of translation dynamics at codon resolution
-
Updated
Jun 27, 2017 - Python
Evaluate various supervised learning methods to predict cohesive energies of solids (kernel ridge regression performs best)
-
Updated
Jul 18, 2014 - Python
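Kernel ridge regression, which that comparison found best, replaces the linear model with a kernel expansion while keeping the ridge penalty. A minimal scikit-learn sketch on synthetic data (the repository's materials features are not reproduced here):

```python
# Kernel ridge regression with an RBF kernel fitting a nonlinear target.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

# alpha is the ridge penalty; gamma sets the RBF kernel width.
model = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5).fit(X, y)
print(model.score(X, y))  # R^2 on the training data
```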
CRAN R Package: Ridge Regression with automatic selection of the penalty parameter
-
Updated
Mar 20, 2020 - R
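The package above is for R; a rough Python analog of automatic penalty selection (by cross-validation over a penalty grid, not the package's own algorithm) is scikit-learn's RidgeCV:

```python
# RidgeCV selects the ridge penalty automatically over a grid of alphas.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
X = rng.standard_normal((150, 10))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * rng.standard_normal(150)

model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X, y)
print(model.alpha_)  # the selected penalty
```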
Movie Recommendation System using the MovieLens dataset
machine-learning
deep-learning
neural-network
svm
collaborative-filtering
ridge-regression
movie-recommendation
movielens-dataset
-
Updated
Mar 7, 2018 - Python
Python notebooks for my graduate class on Detection, Estimation, and Learning. Intended for in-class demonstration. Notebooks illustrate a variety of concepts, from hypothesis testing to estimation to image denoising to Kalman filtering. Feel free to use or modify for your instruction or self-study.
python
machine-learning
signal-processing
detection
jupyter-notebook
regression
estimation
lasso
ridge-regression
hypothesis-testing
maximum-likelihood
teaching-materials
kalman-filter
python-notebook
lasso-regression
estimation-theory
tikhonov-regularization
-
Updated
Apr 23, 2018 - Jupyter Notebook
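In the spirit of those notebooks' estimation and denoising material, Tikhonov regularization can smooth a noisy signal in closed form (a small illustrative sketch, not taken from the course): minimize ||x - y||^2 + lam*||Dx||^2 with D the first-difference operator.

```python
# Tikhonov-regularized denoising of a noisy 1-D signal.
import numpy as np

n = 200
t = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * t)
rng = np.random.default_rng(0)
y = clean + 0.3 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)  # (n-1) x n first-difference matrix
lam = 50.0
# Normal equations of the quadratic objective: (I + lam * D^T D) x = y.
x = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

print(np.mean((x - clean) ** 2), np.mean((y - clean) ** 2))
```

The penalty damps high-frequency components, so the denoised signal is closer to the clean one than the raw observations are.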
Demo from Data Community Bydgoszcz i Toruń, 27.02.2019
machine-learning
linear-regression
sklearn
machine-learning-algorithms
pandas
lightgbm
ridge-regression
pandas-profiling
-
Updated
Sep 27, 2019 - Jupyter Notebook
Understand the relationships between various features and the sale price of a house using exploratory data analysis and statistical analysis. Applied ML algorithms such as Multiple Linear Regression, Ridge Regression, and Lasso Regression in combination with cross-validation, performed parameter tuning, compared the test scores, and suggested the best model to predict the final sale price of a house. Seaborn is used to plot graphs and the scikit-learn package is used for statistical analysis.
python
machine-learning
correlation
linear-regression
cross-validation
data-visualization
data-extraction
data-analysis
regularization
standardization
datawrangling
predictive-modeling
ridge-regression
data-exploration
k-fold
lasso-regression
encoding-library
parameter-tuning
root-mean-squared-error-metric
regression-analysis
-
Updated
Jan 19, 2018 - Jupyter Notebook
Introduction: The context is the 2016 public use NH medical claims files obtained from NH CHIS (Comprehensive Health Care Information System). The dataset contains commercial insurance claims and a small fraction of Medicaid and Medicare payments for dually eligible people. The primary purpose of this assignment is to test machine learning (ML) skills in a real case analysis setting. You are expected to clean and process the data and then apply various ML techniques, including linear and nonlinear models such as regularized regression, MARS, and partitioning methods. You are expected to use at least two of R, Python, and JMP.
Data details: The medical claims file for 2016 contains ~17 million rows and ~60 columns of data, comprising ~6.5 million individual medical claims. These claims are all commercial claims that were filed by healthcare providers in 2016 in the state of NH. About 88% of the claims were for residents of NH; the remainder were for out-of-state visitors who sought care in NH. Each claim consists of one or more line items, each indicating a procedure done during the doctor's visit. Two columns, indicating the billed amount and the paid amount for the care provided, are of primary interest. The main objective is to predict "paid amount per procedure" from the plethora of features available in the dataset. You are also expected to create new features using the existing ones or external data sources.
Objectives:
Step 1: Take a random sample of 1 million unique claims, such that all line items related to each claim are included in the sample. This will result in a little less than 3 million rows of data.
Step 2: Clean up the data, understand the distributions, and create new features if necessary.
Step 3: Run predictive models using a validation method of your choice.
Step 4: Write a descriptive report (less than 10 pages) describing the process and your findings.
-
Updated
Jan 17, 2019 - Jupyter Notebook
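Step 1 above — sampling whole claims so that every line item of a sampled claim is kept — can be sketched in pandas. The column names (claim_id, paid_amount) and the toy data are my assumptions, not the NH CHIS schema:

```python
# Sample at the claim level, then keep all line items of the sampled claims.
import numpy as np
import pandas as pd

# Toy stand-in for the claims file: line items grouped by claim_id.
claims = pd.DataFrame({
    "claim_id": [1, 1, 2, 3, 3, 3],
    "paid_amount": [10.0, 5.0, 80.0, 7.0, 2.0, 1.0],
})

n_claims = 2  # in the assignment this would be 1 million
rng = np.random.default_rng(0)
sampled_ids = rng.choice(claims["claim_id"].unique(), size=n_claims, replace=False)
sample = claims[claims["claim_id"].isin(sampled_ids)]

print(sample.groupby("claim_id").size().to_dict())
```

Sampling ids first and filtering afterwards is what guarantees no claim appears with only some of its line items.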
Applied Machine Learning
python
machine-learning
r
tensorflow
svm
naive-bayes
linear-regression
machine-learning-algorithms
caret
regularization
ridge-regression
principal-component-analysis
principal-components
em-algorithm
nips
elasticnet
lasso-regression
iris-dataset
klar
convolu
-
Updated
Jun 15, 2016 - Python
Harvard Project - Accuracy improvement by adding seasonality premium pricing
python
linear-regression
scikit-learn
regularization
ridge-regression
k-fold
lasso-regression
airbnb-pricing-prediction
airbnb-dataset
premium-pricing
-
Updated
Dec 15, 2016 - Jupyter Notebook
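The k-fold penalty search used in projects like the one above can be written out explicitly (an illustrative sketch on synthetic data, not the project's code):

```python
# Pick the ridge penalty by explicit 5-fold cross-validation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 6))
y = X @ rng.standard_normal(6) + 0.2 * rng.standard_normal(120)

best_alpha, best_mse = None, np.inf
for alpha in [0.01, 0.1, 1.0, 10.0]:
    fold_mse = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model = Ridge(alpha=alpha).fit(X[train], y[train])
        fold_mse.append(np.mean((model.predict(X[test]) - y[test]) ** 2))
    mse = np.mean(fold_mse)
    if mse < best_mse:
        best_alpha, best_mse = alpha, mse

print(best_alpha, best_mse)
```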
Scrapes Bangkok condominium listings from hipflat.com and compares ML model performance
-
Updated
Apr 11, 2019 - HTML
Jupyter notebook that outlines the process of creating a machine learning predictive model. Predicts the peak "Win Shares" of the current draft prospects based on numerous features such as college stats, projected draft pick, physical profile, and age. I try out multiple models and pick the best-performing one for the data based on my judgement.
nba
machine-learning
neural-network
linear-regression
scikit-learn
ridge-regression
multi-layer-perceptron
nba-analytics
prospects
scikitlearn-machine-learning
nba-prediction
lasso-regression
support-vector-regression
regression-algorithms
college-basketball
-
Updated
Apr 17, 2018 - Jupyter Notebook
MDL Complexity computations and experiments from the paper "Revisiting complexity and the bias-variance tradeoff".
machine-learning
statistics
ai
linear-regression
artificial-intelligence
model-selection
complexity
mdl
ridge-regression
linear-regression-models
linear-models
minimum-description-length
bias-variance-tradeoff
bias-variance-trade
double-descent
mean-squared-error
-
Updated
Jun 19, 2020 - Jupyter Notebook
This repository contains only projects using regression analysis techniques. Examples include a comprehensive analysis of retail store expansion strategies using Lasso and Ridge regressions.
-
Updated
May 23, 2018 - Jupyter Notebook
For quick search
-
Updated
Jan 31, 2019 - Python
jolars
commented
Jun 3, 2019
On the following line, I believe that the sparsity is destroyed by the subtraction.
https://github.com/jolars/sgdnet/blob/7e9261a83263a616cc07053c47f5cde5c2333cdb/src/utils.h#L70
It is not necessarily a big problem since it is only temporarily destroyed for each column, but could still probably be better implemented through a for loop.
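The code the comment refers to is C++/Eigen inside sgdnet, but the effect is easy to demonstrate in Python: subtracting a column mean from a sparse matrix densifies it, whereas the centering can be folded into downstream products so the matrix stays sparse (an illustrative sketch, not the sgdnet fix):

```python
# Explicit centering densifies a sparse matrix; implicit centering does not.
import numpy as np
from scipy import sparse

X = sparse.random(1000, 50, density=0.01, random_state=0, format="csc")
means = np.asarray(X.mean(axis=0)).ravel()

dense_centered = X.toarray() - means      # sparsity destroyed
print(np.count_nonzero(dense_centered))   # nearly every entry is now nonzero

# Keep X sparse: (X - 1*mean^T) @ v == X @ v - (mean^T v) for any vector v.
v = np.ones(50)
implicit = X @ v - means @ v
assert np.allclose(implicit, dense_centered @ v)
```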
Analyzes weightlifting videos for correct posture
opencv
machine-learning
computer-vision
random-forest
ensemble-learning
logistic-regression
ridge-regression
mpii-dataset
weightlifting
pose-estimation
elastic-net
lasso-regression
openpose
keypoint-detection
-
Updated
May 19, 2019 - Jupyter Notebook
Implementation of some Machine Learning Algorithms from scratch
-
Updated
Jan 5, 2019 - Python
Implementations of basic machine learning algorithms
machine-learning
svm
least-squares
gaussian-mixture-models
k-means
decision-trees
ridge-regression
bayes-classifier
gaussian-processes
-
Updated
Jul 7, 2017 - Jupyter Notebook
Predicting miles per gallon (MPG) for a car using UCI dataset
-
Updated
May 14, 2017 - Jupyter Notebook
(see branch docs)
hcat(X, 1). IterativeSolvers.cg and not IterativeSolvers.lsqr, as it allows specifying the operator as a linear map, which is efficient and avoids copying when having to add a column to X; anyway, it should be identical apart from pathological cases.
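The fragment above concerns Julia's IterativeSolvers.jl, but the same idea carries over to Python/SciPy: solve the ridge normal equations with conjugate gradients, representing the intercept-augmented matrix [X 1] as a LinearOperator so X is never copied (an illustrative sketch; for simplicity the intercept is penalized here too, which real implementations usually avoid):

```python
# Ridge via CG on ([X 1]^T [X 1] + lam*I) w = [X 1]^T y, without
# materializing the augmented matrix.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))
y = X @ rng.standard_normal(20) + 1.5 + 0.1 * rng.standard_normal(500)
lam = 1.0
n, d = X.shape

def aug_matvec(w):
    """[X 1] @ w: last entry of w is the intercept."""
    return X @ w[:d] + w[d]

def aug_rmatvec(r):
    """[X 1]^T @ r: the intercept row just sums r."""
    return np.concatenate([X.T @ r, [r.sum()]])

A = LinearOperator((d + 1, d + 1),
                   matvec=lambda w: aug_rmatvec(aug_matvec(w)) + lam * w)
b = aug_rmatvec(y)
w, info = cg(A, b)
assert info == 0  # CG converged
print(w[-1])      # fitted intercept
```

Because the normal-equations operator is symmetric positive definite, CG applies directly; the matrix-free formulation is what makes appending the intercept column free.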