DMLC for Scalable and Reliable Machine Learning
News (RSS)
- Oct 11, 2017: RNN made easy with MXNet R
- Jun 1, 2017: Conditional Generative Adversarial Network with MXNet R package
- Jan 18, 2017: MinPy: The NumPy Interface upon MXNet’s Backend
- Jan 7, 2017: Bring TensorBoard to MXNet
- Dec 14, 2016: GPU Accelerated XGBoost
- Nov 21, 2016: Fusion and Runtime Compilation for NNVM and TinyFlow
- Oct 26, 2016: A Full Integration of XGBoost and Apache Spark
- Sep 30, 2016: Build your own TensorFlow with NNVM and Torch
- Aug 19, 2016: Recurrent Models and Examples with MXNetR
- Aug 3, 2016: MXNet Pascal Titan X benchmark
- Jul 29, 2016: Use Caffe operator in MXNet
- Jul 2, 2016: Support Dropout on XGBoost
- Jun 20, 2016: End to End Neural Art with Generative Models
- Apr 4, 2016: Deep3D: Automatic 2D-to-3D Video Conversion with CNNs
- Mar 14, 2016: XGBoost4J: Portable Distributed XGBoost in Spark, Flink and Dataflow
- Mar 10, 2016: An Introduction to the XGBoost R package
- Mar 8, 2016: MXNet Scala Package Released
- Dec 8, 2015: Build an Online Image Classification Service with Shiny and MXNetR
- Nov 15, 2015: Training an LSTM char-rnn in Julia to Generate Random Sentences
- Nov 10, 2015: Deep Learning in a Single File for Smart Devices
- Nov 3, 2015: Deep Learning with MXNetR
- Oct 27, 2015: Training a Deep Net on 14 Million Images Using a Single Machine
Who We Are
DMLC is a group that collaborates on open-source machine learning projects, with the goal of making cutting-edge large-scale machine learning widely available. Its contributors include researchers, PhD students, and data scientists who are actively working in the field.
Machine Learning Libraries
MXNet
Flexible and Efficient Deep Learning Library on Heterogeneous Distributed Systems
XGBoost
General-purpose gradient boosting library, including generalized linear models and gradient boosted decision trees
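XGBoost's implementation is heavily engineered, but the core idea of gradient boosting can be sketched in a few lines: repeatedly fit a weak learner to the current residuals and add a damped copy of it to the ensemble. The sketch below is purely illustrative, using decision stumps as a stand-in for XGBoost's regularized trees; none of these function names come from the XGBoost API.

```python
import numpy as np

def fit_stump(X, r):
    """Exhaustively search for the single split that best fits residuals r."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or (~left).all():
                continue  # degenerate split
            lmean, rmean = r[left].mean(), r[~left].mean()
            err = ((r[left] - lmean) ** 2).sum() + ((r[~left] - rmean) ** 2).sum()
            if err < best_err:
                best_err, best = err, (j, t, lmean, rmean)
    return best

def predict_stump(stump, X):
    j, t, lmean, rmean = stump
    return np.where(X[:, j] <= t, lmean, rmean)

def gradient_boost(X, y, rounds=20, lr=0.3):
    """Boost stumps on squared loss; the residual is the negative gradient."""
    base = y.mean()
    pred = np.full(len(y), base)
    stumps = []
    for _ in range(rounds):
        stump = fit_stump(X, y - pred)   # fit the residuals
        pred += lr * predict_stump(stump, X)
        stumps.append(stump)

    def predict(Xnew):
        out = np.full(len(Xnew), base)
        for s in stumps:
            out += lr * predict_stump(s, Xnew)
        return out
    return predict
```

Each round shrinks the residual, so training error decreases monotonically; the learning rate `lr` trades convergence speed for robustness, just as XGBoost's `eta` does.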
MinPy
The NumPy interface upon MXNet’s backend
System Components
dmlc-core
Data I/O for filesystems such as HDFS and Amazon S3, with job launchers for YARN, MPI, ...
ps-lite
The parameter server framework for asynchronous key-value push and pull
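The push/pull pattern behind a parameter server can be illustrated with a toy single-process key-value store: workers push gradients keyed by parameter name, and pull the current values back. This is only a semantic sketch under invented names (`ToyServer` is hypothetical), not the ps-lite C++ API, and the real system applies pushes asynchronously across machines.

```python
from collections import defaultdict

class ToyServer:
    """Toy key-value parameter store illustrating push/pull semantics."""

    def __init__(self, lr=0.1):
        self.lr = lr
        self.params = defaultdict(float)  # unseen keys default to 0.0

    def push(self, key, grad):
        # Workers push gradients; the server applies an SGD update.
        # (ps-lite does this asynchronously; here it is immediate.)
        self.params[key] -= self.lr * grad

    def pull(self, key):
        # Workers pull the current value before computing the next gradient.
        return self.params[key]
```

A worker loop would alternate `pull` (get fresh weights), local gradient computation, and `push`; asynchrony means different workers may pull slightly stale values.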
Rabit
A lightweight library providing fault-tolerant Allreduce and Broadcast
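Allreduce leaves every worker holding the same reduction (here, a sum) of all workers' inputs. The classic ring algorithm does this by passing one chunk per step around a ring: a reduce-scatter phase followed by an allgather. The single-process simulation below is purely illustrative of the communication pattern; it is not Rabit's implementation and says nothing about its fault-tolerance machinery.

```python
import numpy as np

def ring_allreduce(vectors):
    """Simulated ring Allreduce: every worker ends with the elementwise sum."""
    n = len(vectors)
    # Each worker's vector is split into n chunks; one chunk moves per step.
    chunks = [np.array_split(np.asarray(v, dtype=float), n) for v in vectors]

    # Reduce-scatter: after n-1 steps, worker w fully owns chunk (w+1) % n.
    for s in range(n - 1):
        sends = [(w, (w - s) % n, chunks[w][(w - s) % n].copy())
                 for w in range(n)]            # snapshot: all sends are simultaneous
        for w, c, payload in sends:
            chunks[(w + 1) % n][c] += payload  # receiver accumulates

    # Allgather: circulate the summed chunks so every worker has all of them.
    for s in range(n - 1):
        sends = [(w, (w + 1 - s) % n, chunks[w][(w + 1 - s) % n].copy())
                 for w in range(n)]
        for w, c, payload in sends:
            chunks[(w + 1) % n][c] = payload   # receiver overwrites

    return [np.concatenate(c) for c in chunks]
```

Each worker sends and receives only 2(n-1)/n of the data regardless of the number of workers, which is why the ring layout scales well for large gradient vectors.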
mshadow
A lightweight CPU/GPU matrix/tensor template library
Acknowledgment
We sincerely thank the following organizations (in alphabetical order) for sponsoring the major developers of DMLC and for supporting the DMLC projects.