# othello

Here are 171 public repositories matching this topic.
AlphaZero implementation for Othello, Connect-Four and Tic-Tac-Toe based on "Mastering the game of Go without human knowledge" and "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" by DeepMind.
Topics: game, machine-learning, reinforcement-learning, deep-learning, tensorflow, tic-tac-toe, connect-four, reversi, mcts, othello, tictactoe, resnet, deepmind, connect4, alphago-zero, alpha-zero, alphazero, self-play
Updated Apr 14, 2018 - Python
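The self-play phase that AlphaZero-style projects like this one build on can be sketched generically. A minimal, game-agnostic version (every callback parameter here is a hypothetical illustration, not this repository's API) records `(state, move probabilities, player)` at each step and, after the game, labels each example with the outcome from that player's perspective:

```python
import random

def self_play_episode(start, legal_moves, apply_move, is_terminal, outcome, policy):
    """Play one self-play game; return (state, pi, z) training examples,
    where z is the final result from the recorded player's perspective."""
    examples, state, player = [], start, 1
    while not is_terminal(state):
        moves = legal_moves(state)
        pi = policy(state, moves)                       # dict: move -> probability
        examples.append((state, pi, player))
        move = random.choices(list(pi), weights=list(pi.values()))[0]
        state = apply_move(state, move, player)
        player = -player
    z = outcome(state, player)                          # +1 if player 1 won, -1/0 otherwise
    return [(s, pi, z * p) for s, pi, p in examples]
```

In a full pipeline, `policy` would come from MCTS visit counts guided by the network; here it is just any probability distribution over legal moves.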
Artificial intelligence for Reversi / Othello.
Updated Apr 26, 2018 - Java
Board-game reinforcement learning using the AlphaZero method, including game rules for Makhos (Thai checkers), Reversi, Connect Four, and Tic-tac-toe.
Topics: reinforcement-learning, tensorflow, keras, tic-tac-toe, connect-four, reversi, othello, checkers, draughts, alphago-zero, alphazero
Updated Apr 11, 2018 - Python
UCThello - a board-game demonstrator (Othello variant) with computer AI using Monte Carlo Tree Search (MCTS) with UCB (Upper Confidence Bounds) applied to trees (UCT for short).
Topics: game, board-game, mobile, ai, simulation, mobile-app, artificial-intelligence, mcts, othello, mobile-game, entertainment, ucb, uct, monte-carlo-tree-search, ai-players, upper-confidence-bounds, abstract-game, perfect-information, 2-player-strategy-game
Updated Mar 30, 2018 - JavaScript
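The UCB-applied-to-trees rule mentioned above selects, at each tree node, the child maximizing mean value plus an exploration bonus that shrinks with visits. A minimal sketch of the selection step (the `children` statistics layout is an assumption for illustration, not UCThello's actual code):

```python
import math

def uct_select(children, total_visits, c=1.414):
    """Pick the child move maximizing UCB1.
    `children` maps move -> (visits, total_value); unvisited moves win outright."""
    def ucb1(stats):
        visits, value = stats
        if visits == 0:
            return float("inf")            # always expand unexplored moves first
        return value / visits + c * math.sqrt(math.log(total_visits) / visits)
    return max(children, key=lambda m: ucb1(children[m]))
```

Note how a move with a worse average ("b" below) can still be preferred when it has been tried far less often: that is the exploration term at work.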
Third approach to reinforcement learning in two-player games.
Topics: reinforcement-learning, pytorch, othello, tictactoe, deeplearning, connect4, reinforcement-agents, reinforcement-learning-playground
Updated Oct 1, 2018 - Python
Othello game and AI practice platform in Java.
Updated Oct 11, 2017 - Java
Othello AI (AlphaGo's PV-MCTS algorithm).
Updated Nov 7, 2018 - Python
Othello reinforcement-learning game-playing engine.
Updated Jun 30, 2016 - Python
Playing Othello (Reversi) by reinforcement learning.
Updated Sep 14, 2017 - Python
HybridAlpha - a mix between AlphaGo Zero and AlphaZero for multiple games.
Topics: python, machine-learning, deep-learning, tensorflow, keras, deep-reinforcement-learning, pytorch, extensible, mcts, neural-networks, othello, tictactoe, resnet, flexibility, alpha-beta-pruning, greedy-algorithms, gobang, connect4, alphago-zero, alpha-zero
Updated May 23, 2020 - Python
Visualisation of MCTS in Unity with C# for different games, created for my third-year university project at the University of York.
Topics: visualization, game, ai, university, csharp, unity, tic-tac-toe, visualisation, mcts, othello, dissertation, connect4, mcts-visualisation
Updated Jun 12, 2018 - C#
Othello game (versus a computer AI agent) implemented in Python. See whether you can beat it!
Updated Feb 24, 2018 - Python
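The rule any such Othello engine must implement is disc capture: a move is legal only if it brackets at least one straight run of opponent discs between the new disc and an existing friendly disc. A sketch of the flip computation, assuming a sparse dict-based board (an illustrative layout, not taken from the repository):

```python
# Eight compass directions on the board; a board is a dict {(row, col): 'B' or 'W'},
# so off-board lookups simply return None and terminate each scan.
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def flips(board, row, col, player):
    """Return the opponent discs flipped by `player` placing at (row, col).
    In each direction, a run of opponent discs counts only if it ends
    in one of `player`'s own discs (the core Reversi capture rule)."""
    if (row, col) in board:
        return []                          # square already occupied
    opponent = 'W' if player == 'B' else 'B'
    captured = []
    for dr, dc in DIRS:
        run, r, c = [], row + dr, col + dc
        while board.get((r, c)) == opponent:
            run.append((r, c))
            r, c = r + dr, c + dc
        if run and board.get((r, c)) == player:
            captured.extend(run)           # run is bracketed by a friendly disc
    return captured
```

A move is legal exactly when `flips(...)` is non-empty, which makes legal-move generation a one-line filter over empty squares.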
Othello in Unity, complete with an AI using variable-depth negamax.
Updated Mar 25, 2017 - C#
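Negamax is minimax rewritten for zero-sum games via the identity max(a, b) = -min(-a, -b), so one routine serves both players. A minimal depth-limited sketch (the callback parameters are hypothetical, not this project's API; `evaluate` must score the state from the side to move's point of view):

```python
def negamax(state, depth, moves, apply_move, evaluate):
    """Depth-limited negamax: the value of a position for the side to move is
    the maximum over moves of the negated value of the resulting position,
    because that position is scored from the opponent's perspective."""
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state)             # cutoff or terminal: static evaluation
    return max(-negamax(apply_move(state, m), depth - 1, moves, apply_move, evaluate)
               for m in ms)
```

"Variable depth" in the description then just means choosing the `depth` argument per search, e.g. deepening it as the endgame approaches.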
An advanced AI to play the 2-player board game Othello.
Updated Jan 14, 2017 - Java
AngularJS: refactoring an old 1.3 application using TypeScript, Webpack, and 1.5+ components.
Updated May 9, 2017 - TypeScript
First try at Othello/Reversi.
Updated Oct 6, 2019 - Java
A minimax-based Othello/Reversi AI for 8x8 and 10x10 boards.
Topics: game, cpp, reversi, othello, heuristic, alpha-beta-pruning, game-ai, minmax-algorithm, iterative-deepening-search, minmax
Updated Dec 4, 2018 - C++
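The alpha-beta pruning that several of these engines tag can be sketched on top of plain minimax: `alpha` is the best score the maximizer can already guarantee, `beta` the minimizer's, and once `alpha >= beta` the remaining siblings cannot affect the result. An illustrative version (callback parameters are assumptions, not this repository's code):

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Minimax with alpha-beta pruning; evaluate() scores from the maximizer's view."""
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for m in ms:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False, moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                      # beta cutoff: minimizer avoids this line
        return best
    best = float("inf")
    for m in ms:
        best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                   alpha, beta, True, moves, apply_move, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break                          # alpha cutoff: maximizer avoids this line
    return best
```

Iterative deepening (also tagged above) would wrap this in a loop over increasing `depth`, reusing earlier results for move ordering so the cutoffs fire sooner.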
Othello/Reversi AI - minimax search with alpha-beta pruning.
Updated Dec 13, 2017 - C++
A Java implementation of Reversi/Othello for two players or computers.
Topics: java, player, arm, assembly, random-generation, reversi, othello, assembly-language, assembly-language-programming, othello-game, reversi-game, armsim
Updated Mar 2, 2018 - Java
AI & learning for the Othello game.
Updated Sep 27, 2018 - Java
A multiplayer Reversi (Othello) browser game implemented using serverless WebRTC.
Updated Oct 31, 2017 - JavaScript