An Engine-Agnostic Deep Learning Framework in Java



Deep Java Library (DJL)

Overview

Deep Java Library (DJL) is an open-source, high-level, framework-agnostic Java API for deep learning. DJL is designed to be easy to get started with and simple to use for Java developers. DJL provides a native Java development experience and functions like any other regular Java library.

You don't have to be a machine learning/deep learning expert to get started. You can use your existing Java expertise as an on-ramp to learn and use machine learning and deep learning. You can use your favorite IDE to build, train, and deploy your models. DJL makes it easy to integrate these models with your Java applications.

Because DJL is deep learning framework agnostic, you don't have to choose between frameworks when creating your projects. You can switch frameworks at any point. To ensure the best performance, DJL also selects the CPU or GPU automatically based on your hardware configuration.
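For example, you can check at runtime which engine DJL picked up and whether a GPU is available. A minimal sketch, assuming a DJL engine (such as MXNet) is on the classpath; the `Engine` and `Device` classes are part of the DJL API:

```java
import ai.djl.Device;
import ai.djl.engine.Engine;

public class EngineInfo {
    public static void main(String[] args) {
        // The default engine is discovered from the classpath
        Engine engine = Engine.getInstance();
        System.out.println("Engine: " + engine.getEngineName());

        // DJL falls back to CPU automatically when no GPU is present
        Device device = engine.getGpuCount() > 0 ? Device.gpu() : Device.cpu();
        System.out.println("Device: " + device);
    }
}
```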

DJL's ergonomic API interface is designed to guide you with best practices to accomplish deep learning tasks. The following pseudocode demonstrates running inference:

    // Assume user uses a pre-trained model from model zoo, they just need to load it
    Criteria<Image, Classifications> criteria =
            Criteria.builder()
                    .optApplication(Application.CV.OBJECT_DETECTION) // find an object detection model
                    .setTypes(Image.class, Classifications.class) // define input and output
                    .optFilter("backbone", "resnet50") // choose network architecture
                    .build();

    try (ZooModel<Image, Classifications> model = ModelZoo.loadModel(criteria)) {
        try (Predictor<Image, Classifications> predictor = model.newPredictor()) {
            Image img = ImageFactory.getInstance().fromUrl("http://..."); // read image
            Classifications result = predictor.predict(img);

            // get the classification and probability
            ...
        }
    }
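The elided "get the classification and probability" step can be filled in with the `Classifications` helper methods; continuing the pseudocode above (`best()`, `getClassName()`, and `getProbability()` come from DJL's `Classifications` API):

```java
            // Continuing inside the predictor block above:
            Classifications.Classification best = result.best();
            System.out.printf("%s: %.3f%n", best.getClassName(), best.getProbability());
```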

The following pseudocode demonstrates running training:

    // Construct your neural network with built-in blocks
    Block block = new Mlp(28, 28);

    try (Model model = Model.newInstance()) { // Create an empty model
        model.setBlock(block); // set neural network to model

        // Get training and validation dataset (MNIST dataset)
        Dataset trainingSet = new Mnist.Builder().setUsage(Usage.TRAIN) ... .build();
        Dataset validateSet = new Mnist.Builder().setUsage(Usage.TEST) ... .build();

        // Setup training configurations, such as Initializer, Optimizer, Loss ...
        TrainingConfig config = setupTrainingConfig();
        try (Trainer trainer = model.newTrainer(config)) {
            /*
             * Configure the input shape based on the dataset to initialize the trainer.
             * The 1st axis is the batch axis; we can use 1 for initialization.
             * Each MNIST image is 28x28 grayscale, pre-processed into a 28 * 28 NDArray.
             */
            Shape inputShape = new Shape(1, 28 * 28);
            trainer.initialize(new Shape[] {inputShape});

            TrainingUtils.fit(trainer, epoch, trainingSet, validateSet);
        }

        // Save the model
        model.save(modelDir, "mlp");
    }
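The `setupTrainingConfig()` helper is elided above; the following is a minimal sketch of what it might return, using DJL's `DefaultTrainingConfig` (softmax cross-entropy loss and an accuracy evaluator are typical choices for MNIST classification; the class and method names come from the DJL training API):

```java
import ai.djl.training.DefaultTrainingConfig;
import ai.djl.training.TrainingConfig;
import ai.djl.training.evaluator.Accuracy;
import ai.djl.training.listener.TrainingListener;
import ai.djl.training.loss.Loss;

private static TrainingConfig setupTrainingConfig() {
    return new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // classification loss
            .addEvaluator(new Accuracy())                            // report accuracy each epoch
            .addTrainingListeners(TrainingListener.Defaults.logging()); // log training progress
}
```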

Getting Started

Resources

Release Notes

Building From Source

To build from source, begin by checking out the code. Once you have checked out the code locally, you can build it as follows using Gradle:

    ./gradlew build

To increase build speed, you can use the following command to skip unit tests:

    ./gradlew build -x test

Note: SpotBugs is not compatible with JDK 11+, so it is skipped when you build with JDK 11+.

Slack channel

Join our Slack channel to get in touch with the development team for questions and discussions.

License

This project is licensed under the Apache-2.0 License.
