For any question not answered in this file or in the H2O-3 Documentation, please use the resources listed under Open Source Resources below.
H2O is an in-memory platform for distributed, scalable machine learning. H2O uses familiar interfaces like R, Python, Scala, Java, JSON and the Flow notebook/web interface, and works seamlessly with big data technologies like Hadoop and Spark. H2O provides implementations of many popular algorithms such as Generalized Linear Models (GLM), Gradient Boosting Machines (including XGBoost), Random Forests, Deep Neural Networks, Stacked Ensembles, Naive Bayes, Generalized Additive Models (GAM), Cox Proportional Hazards, K-Means, PCA, Word2Vec, as well as a fully automatic machine learning algorithm (H2O AutoML).
H2O is extensible so that developers can add data transformations and custom algorithms of their choice and access them through all of those clients. H2O models can be downloaded and loaded into H2O memory for scoring, or exported into POJO or MOJO format for extremely fast scoring in production. More information can be found in the H2O User Guide.
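For example, an exported MOJO can be scored from plain Java with the h2o-genmodel library. The following is a minimal sketch, not a definitive recipe: the MOJO file name and the column names are hypothetical placeholders for a model you have already trained and exported.

// Minimal sketch of production MOJO scoring via h2o-genmodel's easy-predict API.
// "gbm_model.zip" and the column names below are hypothetical placeholders.
import hex.genmodel.MojoModel;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.prediction.BinomialModelPrediction;

public class MojoScoringExample {
    public static void main(String[] args) throws Exception {
        EasyPredictModelWrapper model =
                new EasyPredictModelWrapper(MojoModel.load("gbm_model.zip"));

        RowData row = new RowData();   // one observation to score
        row.put("age", "68");
        row.put("plan", "premium");

        BinomialModelPrediction p = model.predictBinomial(row);
        System.out.println("label = " + p.label);
        System.out.println("p(0)  = " + p.classProbabilities[0]);
    }
}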
H2O-3 (this repository) is the third incarnation of H2O, and the successor to H2O-2.
- Downloading H2O-3
- Open Source Resources
- Using H2O-3 Code Artifacts (libraries)
- Building H2O-3
- Launching H2O after Building
- Building H2O on Hadoop
- Sparkling Water
- Documentation
- Citing H2O
- Community / Advisors / Investors
While most of this README is written for developers who do their own builds, most H2O users just download and use a pre-built version. If you are a Python or R user, the easiest way to install H2O is via PyPI or Anaconda (for Python) or CRAN (for R):
pip install h2o            # Python
install.packages("h2o")    # R
For the latest stable, nightly, Hadoop (or Spark / Sparkling Water) releases, or the stand-alone H2O jar, please visit: https://h2o.ai/download
More info on downloading & installing H2O is available in the H2O User Guide.
Most people interact with four primary open source resources: GitHub (which you've already found), GitHub issues (for bug reports and issue tracking), Stack Overflow for H2O code/software-specific questions, and h2ostream (a Google Group / email discussion forum) for questions not suitable for Stack Overflow. There is also a Gitter H2O developer chat group; however, for archival purposes and to maximize accessibility, we'd prefer that standard H2O Q&A be conducted on Stack Overflow.
You can browse and create new issues in our GitHub repository: https://github.com/h2oai/h2o-3
- You can browse and search for issues without logging in to GitHub:
  - Click the Issues tab at the top of the page
  - Apply a filter to search for particular issues
- To create an issue (either a bug or a feature request):
  - Create H2O-3 issues on the page https://github.com/h2oai/h2o-3/issues/new/choose. (Note: Sparkling Water questions should be addressed under the Sparkling Water repository.)
- GitHub
- GitHub issues -- file bug reports / track issues here
  - The https://github.com/h2oai/h2o-3/issues page contains issues for the current H2O-3 project
- Stack Overflow -- ask all code/software questions here
- Cross Validated (Stack Exchange) -- ask algorithm/theory questions here
- h2ostream Google Group -- ask non-code-related questions here
  - Web: https://groups.google.com/d/forum/h2ostream
  - Mail to: [email protected]
- Gitter H2O Developer Chat
- Documentation
  - H2O User Guide (main docs): http://docs.h2o.ai/h2o/latest-stable/h2o-docs/index.html
  - All H2O documentation links: http://docs.h2o.ai
  - Nightly build page (nightly docs linked in page): https://s3.amazonaws.com/h2o-release/h2o/master/latest.html
- Download (pre-built packages)
- Website
- Twitter -- follow us for updates and H2O news!
- Awesome H2O -- share your H2O-powered creations with us
Every nightly build publishes R, Python, Java, and Scala artifacts to a build-specific repository. In particular, you can find Java artifacts in the maven/repo directory.
Here is an example snippet of a gradle build file using h2o-3 as a dependency. Replace x, y, z, and nnnn with valid numbers.
// h2o-3 dependency information
def h2oBranch = 'master'
def h2oBuildNumber = 'nnnn'
def h2oProjectVersion = "x.y.z.${h2oBuildNumber}"

repositories {
  // h2o-3 dependencies
  maven {
    url "https://s3.amazonaws.com/h2o-release/h2o-3/${h2oBranch}/${h2oBuildNumber}/maven/repo/"
  }
}

dependencies {
  compile "ai.h2o:h2o-core:${h2oProjectVersion}"
  compile "ai.h2o:h2o-algos:${h2oProjectVersion}"
  compile "ai.h2o:h2o-web:${h2oProjectVersion}"
  compile "ai.h2o:h2o-app:${h2oProjectVersion}"
}
Refer to the latest H2O-3 bleeding edge nightly build page for information about installing nightly build artifacts.
Refer to the h2o-droplets GitHub repository for a working example of how to use Java artifacts with gradle.
Note: Stable H2O-3 artifacts are periodically published to Maven Central (click here to search) but may substantially lag behind H2O-3 Bleeding Edge nightly builds.
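Once these artifacts are on the classpath, booting an in-process single-node H2O cluster takes one call. This is a minimal sketch assuming the h2o-app dependency above; water.H2OApp is the same entry point used by the standalone h2o.jar.

// Minimal sketch: start a single-node H2O cluster inside this JVM.
// Flow is then served on http://localhost:54321 by default.
import water.H2OApp;

public class EmbeddedH2OExample {
    public static void main(String[] args) {
        H2OApp.main(args);   // accepts the same flags as `java -jar h2o.jar`
    }
}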
Getting started with H2O development requires JDK 1.8+, Node.js, Gradle, Python, and R. We use the Gradle wrapper (called gradlew) to ensure up-to-date local versions of Gradle and other dependencies are installed in your development directory.
Building h2o requires a properly set up R environment with the required packages and a Python environment with the following packages:
grip
tabulate
requests
wheel
To install these packages you can use pip or conda. If you have trouble installing them on Windows, please follow the Setup on Windows section of this guide.
(Note: We recommend using a virtual environment such as VirtualEnv to install all packages.)
To build H2O from the repository, perform the following steps.
# Build H2O
git clone https://github.com/h2oai/h2o-3.git
cd h2o-3
./gradlew build -x test
You may encounter problems, e.g. a missing npm. If so, install it:
brew install npm
# Start H2O
java -jar build/h2o.jar
# Point browser to http://localhost:54321
git clone https://github.com/h2oai/h2o-3.git
cd h2o-3
./gradlew syncSmalldata
./gradlew syncRPackages
./gradlew build
Notes:
- Running tests starts five test JVMs that form an H2O cluster and requires at least 8GB of RAM (preferably 16GB of RAM).
- Running ./gradlew syncRPackages is supported on Windows, OS X, and Linux, and is strongly recommended but not required. ./gradlew syncRPackages ensures a complete and consistent environment with pre-approved versions of the packages required for tests and builds. The packages can be installed manually, but we recommend setting an ENV variable and using ./gradlew syncRPackages. To set the ENV variable, use the following format (where ${WORKSPACE} can be any path):

mkdir -p ${WORKSPACE}/Rlibrary
export R_LIBS_USER=${WORKSPACE}/Rlibrary
To update an existing workspace and rebuild:

git pull
./gradlew syncSmalldata
./gradlew syncRPackages
./gradlew clean
./gradlew build
- We recommend using ./gradlew clean after each git pull.
- Skip tests by adding -x test at the end of the gradle build command line. Tests typically run for 7-10 minutes on a Macbook Pro laptop with 4 CPUs (8 hyperthreads) and 16 GB of RAM.
- Syncing smalldata is not required after each pull, but if tests fail due to missing data files, then try ./gradlew syncSmalldata as the first troubleshooting step. Syncing smalldata downloads data files from AWS S3 to the smalldata directory in your workspace. The sync is incremental. Do not check in these files. The smalldata directory is in .gitignore. If you do not run any tests, you do not need the smalldata directory.
- Running ./gradlew syncRPackages is supported on Windows, OS X, and Linux, and is strongly recommended but not required. ./gradlew syncRPackages ensures a complete and consistent environment with pre-approved versions of the packages required for tests and builds. The packages can be installed manually, but we recommend setting an ENV variable and using ./gradlew syncRPackages. To set the ENV variable, use the following format (where ${WORKSPACE} can be any path):

mkdir -p ${WORKSPACE}/Rlibrary
export R_LIBS_USER=${WORKSPACE}/Rlibrary
To build the documentation (using the fast dist mode) and open it locally:

./gradlew clean && ./gradlew build -x test && (export DO_FAST=1; ./gradlew dist)
open target/docs-website/h2o-docs/index.html
The root of the git repository contains a Makefile with convenient shortcuts for frequent build targets used in development.
To build h2o.jar while skipping tests and the building of alternative assemblies, execute:

make

To build h2o.jar using the minimal assembly, run:

make minimal
The minimal assembly is well suited for development of H2O machine learning algorithms. It doesn't bundle some heavyweight dependencies (like Hadoop), so using it saves build time and avoids the need to download large libraries from Maven repositories.
Step 1: Download and install WinPython.
From the command line, validate that python is using the newly installed package by using which python (or sudo which python). Update the Environment variable with the WinPython path.
pip install grip tabulate wheel
Install Java 1.8+ and add the JDK's bin directory (e.g. C:\Program Files\Java\<jdk-version>\bin, the directory containing java.exe) to PATH in Environment Variables. To make sure the command prompt is detecting the correct Java version, run:
javac -version
The CLASSPATH variable also needs to be set to the lib subfolder of the JDK:
CLASSPATH=/<path>/<to>/<jdk>/lib
Install Node.js and add the installed directory C:\Program Files\nodejs, which must include node.exe and npm.cmd, to PATH if not already prepended.
Install R and add the bin directory to your PATH if not already included.
Install the following R packages: RCurl, jsonlite, statmod, devtools, roxygen2, and testthat. To install these packages from within an R session:
pkgs <- c("RCurl", "jsonlite", "statmod", "devtools", "roxygen2", "testthat")
for (pkg in pkgs) {
if (! (pkg %in% rownames(installed.packages()))) install.packages(pkg)
}
Note that libcurl is required for installation of the RCurl R package.
Note that these packages don't cover running tests; they are needed only for building H2O.
Finally, install Rtools, which is a collection of command line tools to facilitate R development on Windows.
NOTE: During Rtools installation, do not install Cygwin.dll.
Step 6. Install Cygwin
NOTE: During installation of Cygwin, deselect the Python packages to avoid a conflict with the Python.org package.
If Cygwin is already installed, remove the Python packages or ensure that Native Python is before Cygwin in the PATH variable.
Step 8. Git Clone h2o-3
If you don't already have a Git client, please install one; the default one can be found at http://git-scm.com/downloads. Make sure that command prompt support is enabled before the installation.

Download and update the h2o-3 source code:
git clone https://github.com/h2oai/h2o-3
cd h2o-3
./gradlew.bat build
If you encounter errors, run again with --stacktrace for more information about missing dependencies.
If you don't have Homebrew, we recommend installing it. It makes package management for OS X easy.
Install Java 1.8+. To make sure the command prompt is detecting the correct Java version, run:
javac -version
Using Homebrew:
brew install node
Otherwise, install from the NodeJS website.
Install R and add the bin directory to your PATH if not already included.
Install the following R packages: RCurl, jsonlite, statmod, devtools, roxygen2, and testthat. To install these packages from within an R session:
pkgs <- c("RCurl", "jsonlite", "statmod", "devtools", "roxygen2", "testthat")
for (pkg in pkgs) {
if (! (pkg %in% rownames(installed.packages()))) install.packages(pkg)
}
Note that libcurl is required for installation of the RCurl R package.
Note that these packages don't cover running tests; they are needed only for building H2O.
Install Python:
brew install python
Install pip package manager:
sudo easy_install pip
Next install required packages:
sudo pip install wheel requests tabulate
Step 5. Git Clone h2o-3
OS X should already have Git installed. To download and update the h2o-3 source code:
git clone https://github.com/h2oai/h2o-3
cd h2o-3
./gradlew build
Note: on a regular machine it may take a very long time (about an hour) to run all the tests.
If you encounter errors, run again with --stacktrace for more information about missing dependencies.
curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
sudo apt-get install -y nodejs
Install Java 8. Installation instructions can be found on the JDK installation page. To make sure the command prompt is detecting the correct Java version, run:
javac -version
Installation instructions can be found on the R installation page. Click “Download R for Linux”, then click “ubuntu” and follow the given instructions.
To install the required packages, follow the same instructions as for OS X above.
Note: If the process fails to install RStudio Server on Linux, run one of the following:
sudo apt-get install libcurl4-openssl-dev
or
sudo apt-get install libcurl4-gnutls-dev
Step 4. Git Clone h2o-3
If you don't already have a Git client:
sudo apt-get install git
Download and update the h2o-3 source code:
git clone https://github.com/h2oai/h2o-3
cd h2o-3
./gradlew build
If you encounter errors, run again with --stacktrace for more information about missing dependencies.
Make sure that you are not running as root, since bower will reject such a run.
curl -sL https://deb.nodesource.com/setup_16.x | sudo bash -
sudo apt-get install -y nodejs
cd /opt
sudo wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.tar.gz"
sudo tar xzf jdk-7u79-linux-x64.tar.gz
cd jdk1.7.0_79
sudo alternatives --install /usr/bin/java java /opt/jdk1.7.0_79/bin/java 2
sudo alternatives --install /usr/bin/jar jar /opt/jdk1.7.0_79/bin/jar 2
sudo alternatives --install /usr/bin/javac javac /opt/jdk1.7.0_79/bin/javac 2
sudo alternatives --set java /opt/jdk1.7.0_79/bin/java
sudo alternatives --set jar /opt/jdk1.7.0_79/bin/jar
sudo alternatives --set javac /opt/jdk1.7.0_79/bin/javac
cd /opt
sudo wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
sudo rpm -ivh epel-release-7-5.noarch.rpm
echo "multilib_policy=best" | sudo tee -a /etc/yum.conf
sudo yum -y update
sudo yum -y install R R-devel git python-pip openssl-devel libxml2-devel libcurl-devel gcc gcc-c++ make openssl-devel kernel-devel texlive texinfo texlive-latex-fonts libX11-devel mesa-libGL-devel mesa-libGL nodejs npm python-devel numpy scipy python-pandas
sudo pip install scikit-learn grip tabulate statsmodels wheel
mkdir ~/Rlibrary
export JAVA_HOME=/opt/jdk1.7.0_79
export JRE_HOME=/opt/jdk1.7.0_79/jre
export PATH=$PATH:/opt/jdk1.7.0_79/bin:/opt/jdk1.7.0_79/jre/bin
export R_LIBS_USER=~/Rlibrary
# install local R packages
R -e 'install.packages(c("RCurl","jsonlite","statmod","devtools","roxygen2","testthat"), dependencies=TRUE, repos="http://cran.rstudio.com/")'
cd
git clone https://github.com/h2oai/h2o-3.git
cd h2o-3
# Build H2O
./gradlew syncSmalldata
./gradlew syncRPackages
./gradlew build -x test
To start the H2O cluster locally, execute the following on the command line:
java -jar build/h2o.jar
A list of available start-up JVM and H2O options (e.g. -Xmx, -nthreads, -ip) is available in the H2O User Guide.
Pre-built H2O-on-Hadoop zip files are available on the download page. Each Hadoop distribution version has a separate zip file in h2o-3.
To build H2O with Hadoop support yourself, first install Sphinx for Python: pip install sphinx
Then start the build by entering the following from the top-level h2o-3 directory:
export BUILD_HADOOP=1;
./gradlew build -x test;
./gradlew dist;
This will create a directory called 'target' and generate zip files there. Note that BUILD_HADOOP is the default behavior when the username is jenkins (refer to settings.gradle); otherwise you have to request it, as shown above.
To build the zip files only for selected distributions, use the H2O_TARGET env variable together with BUILD_HADOOP, for example:
export BUILD_HADOOP=1;
export H2O_TARGET=hdp2.5,hdp2.6
./gradlew build -x test;
./gradlew dist;
In the h2o-hadoop directory, each Hadoop version has a build directory for the driver and an assembly directory for the fatjar.

You need to:
- Add a new driver directory and assembly directory (each with a build.gradle file) in h2o-hadoop
- Add these new projects to h2o-3/settings.gradle
- Add the new Hadoop version to HADOOP_VERSIONS in make-dist.sh
- Add the new Hadoop version to the list in h2o-dist/buildinfo.json
Hadoop supports secure user impersonation through its Java API. A kerberos-authenticated user can be allowed to proxy any username that meets specified criteria entered in the NameNode's core-site.xml file. This impersonation only applies to interactions with the Hadoop API or the APIs of Hadoop-related services that support it (this is not the same as switching to that user on the machine of origin).
Setting up secure user impersonation (for h2o):
- Create or find an id to use as a proxy which has limited-to-no access to HDFS or related services; the proxy user need only be used to impersonate a user
- (Required if not using h2odriver) If you are not using the driver (e.g. you wrote your own code against h2o's API using Hadoop), make the necessary code changes to impersonate users (see org.apache.hadoop.security.UserGroupInformation and the sketch below)
- In either Ambari/Cloudera Manager or directly in the NameNode's core-site.xml file, add two or three of the following properties for the user we wish to use as a proxy (replace <proxyusername> with the simple user name, not the fully-qualified principal name):
  - hadoop.proxyuser.<proxyusername>.hosts: the hosts from which the proxy user is allowed to perform impersonated actions on behalf of a valid user
  - hadoop.proxyuser.<proxyusername>.groups: the groups an impersonated user must belong to for impersonation to work with that proxy user
  - hadoop.proxyuser.<proxyusername>.users: the users a proxy user is allowed to impersonate
- Example:
<property>
  <name>hadoop.proxyuser.myproxyuser.hosts</name>
  <value>host1,host2</value>
</property>
<property>
  <name>hadoop.proxyuser.myproxyuser.groups</name>
  <value>group1,group2</value>
</property>
<property>
  <name>hadoop.proxyuser.myproxyuser.users</name>
  <value>user1,user2</value>
</property>
- Restart core services such as HDFS & YARN for the changes to take effect
Impersonated HDFS actions can be viewed in the HDFS audit log ('auth:PROXY' should appear in the ugi= field in entries where this is applicable). YARN similarly should show 'auth:PROXY' somewhere in the Resource Manager UI.
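For the non-h2odriver case mentioned in the list above, the sketch below shows the general shape of Hadoop's proxy-user API. It is a minimal illustration, not h2o code: the impersonated username ("alice") and the HDFS path are hypothetical, and the login user must hold valid Kerberos credentials that the core-site.xml properties above authorize as a proxy.

// Hedged sketch of secure impersonation via Hadoop's Java API.
// "alice" and the path below are hypothetical placeholders.
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserExample {
    public static void main(String[] args) throws Exception {
        // The real (kerberos-authenticated) user configured as a proxy in core-site.xml
        UserGroupInformation realUser = UserGroupInformation.getLoginUser();
        // Impersonate "alice"; HDFS/YARN enforce the proxyuser rules, not this code
        UserGroupInformation proxyUgi =
                UserGroupInformation.createProxyUser("alice", realUser);
        proxyUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
            // This access is audited as "alice" with auth:PROXY in the HDFS audit log
            FileSystem fs = FileSystem.get(new Configuration());
            System.out.println(fs.exists(new Path("/user/alice")));
            return null;
        });
    }
}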
To use secure impersonation with h2o's Hadoop driver:
Before attempting this, see Risks with impersonation, below.
When using the h2odriver (e.g. when running with hadoop jar ...), specify -principal <proxy user kerberos principal>, -keytab <proxy user keytab path>, and -run_as_user <hadoop username to impersonate>, in addition to any other arguments needed. If the configuration was successful, the proxy user will log in and impersonate the -run_as_user as long as that user is allowed by either the users or groups configuration property (configured above); this is enforced by HDFS & YARN, not h2o's code. The driver effectively sets its security context as the impersonated user, so all supported Hadoop actions will be performed as that user (e.g. the YARN and HDFS APIs support securely impersonated users, but others may not).
- The target use case for secure impersonation is applications or services that pre-authenticate a user and then use (in this case) the h2odriver on behalf of that user. H2O's Steam is a perfect example: auth user in web app over SSL, impersonate that user when creating the h2o YARN container.
- The proxy user should have limited permissions in the Hadoop cluster; this means no permissions to access data or make API calls. In this way, if it's compromised it would only have the power to impersonate a specific subset of the users in the cluster and only from specific machines.
- Use the hadoop.proxyuser.<proxyusername>.hosts property whenever possible or practical.
- Don't give the proxy username's password or keytab to any user you don't want to impersonate another user (this is generally any user). The point of impersonation is not to allow users to impersonate each other. See the first bullet for the typical use case.
- Limit user logon to the machine the proxying is occurring from whenever practical.
- Make sure the keytab used to log in the proxy user is properly secured and that users can't log in as that id (via su, for instance).
- Never set hadoop.proxyuser.<proxyusername>.{users,groups} to '*' or 'hdfs', 'yarn', etc. Allowing any user to impersonate hdfs, yarn, or any other important user/group should be done with extreme caution and strongly analyzed before it's allowed.
- The id performing the impersonation can be compromised like any other user id.
- Setting any hadoop.proxyuser.<proxyusername>.{hosts,groups,users} property to '*' can greatly increase exposure to security risk.
- When users aren't authenticated before being used with the driver (e.g. like Steam does via a secure web app/API), auditability of the process/system is difficult.
For example, the following git diff bundles HDFS persistence into h2o-app and swaps the Hadoop client dependency (here, targeting a MapR distribution):

$ git diff
diff --git a/h2o-app/build.gradle b/h2o-app/build.gradle
index af3b929..097af85 100644
--- a/h2o-app/build.gradle
+++ b/h2o-app/build.gradle
@@ -8,5 +8,6 @@ dependencies {
compile project(":h2o-algos")
compile project(":h2o-core")
compile project(":h2o-genmodel")
+ compile project(":h2o-persist-hdfs")
}
diff --git a/h2o-persist-hdfs/build.gradle b/h2o-persist-hdfs/build.gradle
index 41b96b2..6368ea9 100644
--- a/h2o-persist-hdfs/build.gradle
+++ b/h2o-persist-hdfs/build.gradle
@@ -2,5 +2,6 @@ description = "H2O Persist HDFS"
dependencies {
compile project(":h2o-core")
- compile("org.apache.hadoop:hadoop-client:2.0.0-cdh4.3.0")
+ compile("org.apache.hadoop:hadoop-client:2.4.1-mapr-1408")
+ compile("org.json:org.json:chargebee-1.0")
}
Sparkling Water combines two open-source technologies: Apache Spark and the H2O Machine Learning platform. It makes H2O's library of advanced algorithms, including Deep Learning, GLM, GBM, K-Means, and Distributed Random Forest, accessible from Spark workflows. Spark users can select the best features from either platform to meet their Machine Learning needs. Users can combine Spark's RDD API and Spark MLlib with H2O's machine learning algorithms, or use H2O independently of Spark for the model building process and post-process the results in Spark.
Sparkling Water Resources:
- Download page for pre-built packages
- Sparkling Water GitHub repository
- README
- Developer documentation
The main H2O documentation is the H2O User Guide. Visit http://docs.h2o.ai for the top-level introduction to documentation on H2O projects.
To generate the REST API documentation, use the following commands:
cd ~/h2o-3
cd py
python ./generate_rest_api_docs.py # to generate Markdown only
python ./generate_rest_api_docs.py --generate_html --github_user GITHUB_USER --github_password GITHUB_PASSWORD # to generate Markdown and HTML
The default location for the generated documentation is build/docs/REST.
If the build fails, try gradlew clean, then git clean -f.
Documentation for each bleeding edge nightly build is available on the nightly build page.
If you use H2O as part of your workflow in a publication, please cite your H2O resource(s) using the following BibTeX entry:
@Manual{h2o_package_or_module,
title = {package_or_module_title},
author = {H2O.ai},
year = {year},
month = {month},
note = {version_information},
url = {resource_url},
}
Formatted H2O Software citation examples:
- H2O.ai (Oct. 2016). Python Interface for H2O, Python module version 3.10.0.8. https://github.com/h2oai/h2o-3.
- H2O.ai (Oct. 2016). R Interface for H2O, R package version 3.10.0.8. https://github.com/h2oai/h2o-3.
- H2O.ai (Oct. 2016). H2O, H2O version 3.10.0.8. https://github.com/h2oai/h2o-3.
H2O algorithm booklets are available at the Documentation Homepage.
@Manual{h2o_booklet_name,
title = {booklet_title},
author = {list_of_authors},
year = {year},
month = {month},
url = {link_url},
}
Formatted booklet citation examples:
Arora, A., Candel, A., Lanford, J., LeDell, E., and Parmar, V. (Oct. 2016). Deep Learning with H2O. http://docs.h2o.ai/h2o/latest-stable/h2o-docs/booklets/DeepLearningBooklet.pdf.
Click, C., Lanford, J., Malohlava, M., Parmar, V., and Roark, H. (Oct. 2016). Gradient Boosted Models with H2O. http://docs.h2o.ai/h2o/latest-stable/h2o-docs/booklets/GBMBooklet.pdf.
H2O has been built by a great many contributors over the years, both within H2O.ai (the company) and the greater open source community. You can begin to contribute to H2O by answering Stack Overflow questions or filing bug reports. Please join us!
SriSatish Ambati
Cliff Click
Tom Kraljevic
Tomas Nykodym
Michal Malohlava
Kevin Normoyle
Spencer Aiello
Anqi Fu
Nidhi Mehta
Arno Candel
Josephine Wang
Amy Wang
Max Schloemer
Ray Peck
Prithvi Prabhu
Brandon Hill
Jeff Gambera
Ariel Rao
Viraj Parmar
Kendall Harris
Anand Avati
Jessica Lanford
Alex Tellez
Allison Washburn
Amy Wang
Erik Eckstrand
Neeraja Madabhushi
Sebastian Vidrio
Ben Sabrin
Matt Dowle
Mark Landry
Erin LeDell
Andrey Spiridonov
Oleg Rogynskyy
Nick Martin
Nancy Jordan
Nishant Kalonia
Nadine Hussami
Jeff Cramer
Stacie Spreitzer
Vinod Iyengar
Charlene Windom
Parag Sanghavi
Navdeep Gill
Lauren DiPerna
Anmol Bal
Mark Chan
Nick Karpov
Avni Wadhwa
Ashrith Barthur
Karen Hayrapetyan
Jo-fai Chow
Dmitry Larko
Branden Murray
Jakub Hava
Wen Phan
Magnus Stensmo
Pasha Stetsenko
Angela Bartz
Mateusz Dymczyk
Micah Stubbs
Ivy Wang
Terone Ward
Leland Wilkinson
Wendy Wong
Nikhil Shekhar
Pavel Pscheidl
Michal Kurka
Veronika Maurerova
Jan Sterba
Jan Jendrusak
Sebastien Poirier
Tomáš Frýda
Ard Kelmendi
Yuliia Syzon
Adam Valenta
Marek Novotny
Zuzana Olajcova
Scientific Advisory Council
Stephen Boyd
Rob Tibshirani
Trevor Hastie
Systems, Data, FileSystems and Hadoop
Doug Lea
Chris Pouliot
Dhruba Borthakur
Jishnu Bhattacharjee, Nexus Venture Partners
Anand Babu Periasamy
Anand Rajaraman
Ash Bhardwaj
Rakesh Mathur
Michael Marks
Egbert Bierman
Rajesh Ambati