GPyTorch

A highly efficient and modular implementation of Gaussian Processes in PyTorch.

GPyTorch is a Gaussian process library implemented using PyTorch. GPyTorch is designed for creating scalable, flexible, and modular Gaussian process models with ease.

Internally, GPyTorch differs from many existing approaches to GP inference by performing all inference operations using modern numerical linear algebra techniques such as preconditioned conjugate gradients. Implementing a scalable GP method is as simple as providing a matrix multiplication routine for the kernel matrix and its derivative via our LazyTensor interface, or composing many of our existing LazyTensors. This not only makes it easy to implement popular scalable GP techniques, but often also leads to significantly better utilization of GPU computing compared to solvers based on the Cholesky decomposition.
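
As a rough sketch of what this looks like in code (assuming the LazyTensor interface described here; newer releases rename LazyTensor to LinearOperator and inv_matmul to solve, so check the current documentation for exact method names):

import torch
import gpytorch

x = torch.randn(1000, 3)
kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

# kernel(x) produces a lazily evaluated kernel matrix rather than a dense tensor.
covar = kernel(x).evaluate_kernel()
system = covar.add_jitter(1e-4)  # K + eps * I, still represented lazily

# Solves against the kernel matrix go through matrix-multiplication-based routines
# (preconditioned conjugate gradients) instead of a Cholesky factorization.
rhs = torch.randn(1000, 1)
with gpytorch.settings.max_cholesky_size(0):  # force the iterative code path
    solve = system.inv_matmul(rhs)            # approximately K^{-1} rhs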

GPyTorch provides (1) significant GPU acceleration (through MVM-based inference); (2) state-of-the-art implementations of the latest algorithmic advances for scalability and flexibility (SKI/KISS-GP, stochastic Lanczos expansions, LOVE, SKIP, stochastic variational deep kernel learning, and more); and (3) easy integration with deep learning frameworks.

Examples, Tutorials, and Documentation

See our numerous examples and tutorials on how to construct all sorts of models in GPyTorch. A minimal sketch follows below.
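
For a quick taste, here is a minimal sketch in the spirit of the simple GP regression tutorial (see the linked tutorials for complete, up-to-date code):

import math
import torch
import gpytorch

# Toy 1-D regression data
train_x = torch.linspace(0, 1, 100)
train_y = torch.sin(train_x * 2 * math.pi) + 0.1 * torch.randn(train_x.size(0))

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)

# Fit hyperparameters with an ordinary PyTorch optimizer by maximizing the
# exact marginal log likelihood.
model.train()
likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for _ in range(50):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()

# Make predictions; fast_pred_var() enables LOVE-style fast predictive variances.
model.eval()
likelihood.eval()
test_x = torch.linspace(0, 1, 51)
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    pred = likelihood(model(test_x))
    mean = pred.mean
    lower, upper = pred.confidence_region()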

Installation

Requirements:
  • Python >= 3.6
  • PyTorch >= 1.5

Install GPyTorch using pip or conda:

pip install gpytorch
conda install gpytorch -c gpytorch

(To use packages globally but install GPyTorch as a user-only package, use pip install --user above.)

Latest (unstable) version

To upgrade to the latest (unstable) version, run

pip install --upgrade git+https://github.com/cornellius-gp/gpytorch.git

ArchLinux Package

Note: experimental AUR package. For most users, we recommend installing via conda or pip.

GPyTorch is also available in the ArchLinux User Repository (AUR). You can install it with an AUR helper, such as yay, as follows:

yay -S python-gpytorch

To discuss any issues related to this AUR package, refer to the comments section of python-gpytorch.

Citing Us

If you use GPyTorch, please cite the following paper:

Gardner, Jacob R., Geoff Pleiss, David Bindel, Kilian Q. Weinberger, and Andrew Gordon Wilson. "GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration." In Advances in Neural Information Processing Systems (2018).

@inproceedings{gardner2018gpytorch,
  title={GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration},
  author={Gardner, Jacob R and Pleiss, Geoff and Bindel, David and Weinberger, Kilian Q and Wilson, Andrew Gordon},
  booktitle={Advances in Neural Information Processing Systems},
  year={2018}
}

Development

To run the unit tests:

python -m unittest

By default, the random seeds are locked down for some of the tests. If you want to run the tests without locking down the seed, run

UNLOCK_SEED=true python -m unittest

If you plan on submitting a pull request, please make use of our pre-commit hooks to ensure that your commits adhere to the general style guidelines enforced by the repo. To do this, navigate to your local repository and run:

pip install pre-commit
pre-commit install

From then on, whenever you commit to gpytorch or a fork of it, flake8, isort, black, and other tools will automatically run over the files you commit.

The Team

GPyTorch is primarily maintained by researchers at Cornell University, Facebook, and Uber AI.

We would like to thank our other contributors, including (but not limited to) David Arbour, Eytan Bakshy, David Eriksson, Jared Frank, Sam Stanton, Bram Wallace, Ke Alexander Wang, and Ruihan Wu.

Acknowledgements

Development of GPyTorch is supported by funding from the Bill and Melinda Gates Foundation, the National Science Foundation, and SAP.


(The first version translated by vz on 2020.07.26)

Key Metrics

Overview
  • Name & owner: cornellius-gp/gpytorch
  • Primary language: Python
  • Languages: Python (1 language)
  • Platforms: Linux, Mac, Windows
  • License: MIT License

Owner Activity
  • Created: 2017-06-09 14:48:20
  • Last pushed: 2025-03-11 14:20:18
  • Releases: 41
  • Latest release: v1.14
  • First release: alpha

User Engagement
  • Stars: 3.7k
  • Watchers: 56
  • Forks: 564
  • Commits: 3.9k
  • Issues: 1337
  • Open issues: 347
  • Pull requests: 841
  • Open pull requests: 34
  • Closed pull requests: 56
