New to PyTorch?
The 60-minute blitz is the most common starting point and provides a broad view of how to use PyTorch. It covers the basics all the way to constructing deep neural networks.
PyTorch Recipes
Bite-size, ready-to-deploy PyTorch code examples.
Understand PyTorch’s Tensor library and neural networks at a high level.
This tutorial introduces the fundamental concepts of PyTorch through self-contained examples.
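As a taste of what that tutorial covers, here is a minimal self-contained autograd example (a sketch, assuming PyTorch is installed):

```python
import torch

# Create a tensor that tracks gradients
x = torch.ones(2, 2, requires_grad=True)

# Build a small computation graph: y = sum(x^2)
y = (x * x).sum()

# Backpropagate; dy/dx = 2x, which is 2.0 everywhere here
y.backward()

print(x.grad)  # a (2, 2) tensor filled with 2.0
```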
Use torch.nn to create and train a neural network.
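The shape of a `torch.nn` training step looks roughly like the following sketch; the network `TinyNet` and its layer sizes are illustrative, not from any tutorial:

```python
import torch
import torch.nn as nn

# A tiny illustrative network (TinyNet is a made-up name for this sketch)
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 1)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

net = TinyNet()
opt = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# One training step on random data
x, target = torch.randn(16, 4), torch.randn(16, 1)
loss = loss_fn(net(x), target)
opt.zero_grad()
loss.backward()
opt.step()
```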
Visualizing Models, Data, and Training with TensorBoard
Learn to use TensorBoard to visualize data and model training.
Finetune a pre-trained Mask R-CNN model.
Train a convolutional neural network for image classification using transfer learning.
Train a generative adversarial network (GAN) to generate new celebrities.
Learn to load and preprocess data from a simple dataset with PyTorch's torchaudio library.
Sequence-to-Sequence Modeling with nn.Transformer and torchtext
Learn how to train a sequence-to-sequence model that uses the nn.Transformer module.
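At its core, `nn.Transformer` takes a source and a target sequence and returns decoder output of the target's shape. A minimal sketch (the dimensions below are arbitrary, chosen small for illustration):

```python
import torch
import torch.nn as nn

# A small Transformer; sizes are illustrative
model = nn.Transformer(d_model=16, nhead=2,
                       num_encoder_layers=1, num_decoder_layers=1)

# Default layout is (sequence_length, batch_size, d_model)
src = torch.randn(10, 4, 16)  # source sequence
tgt = torch.randn(7, 4, 16)   # target sequence

out = model(src, tgt)
print(out.shape)  # torch.Size([7, 4, 16]) — matches the target sequence
```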
NLP from Scratch: Classifying Names with a Character-level RNN
Build and train a basic character-level RNN to classify words from scratch, without the use of torchtext. First in a series of three tutorials.
NLP from Scratch: Generating Names with a Character-level RNN
After using a character-level RNN to classify names, learn how to generate names from languages. Second in a series of three tutorials.
NLP from Scratch: Translation with a Sequence-to-sequence Network and Attention
This is the third and final tutorial on doing “NLP From Scratch”, where we write our own classes and functions to preprocess the data to do our NLP modeling tasks.
Use torchtext to preprocess data from a well-known dataset containing both English and German. Then use it to train a sequence-to-sequence model.
Learn how to use PyTorch to train a Deep Q Learning (DQN) agent on the CartPole-v0 task from the OpenAI Gym.
Deploying PyTorch in Python via a REST API with Flask
Deploy a PyTorch model using Flask and expose a REST API for model inference, using the example of a pretrained DenseNet 121 model that classifies images.
Introduction to TorchScript, an intermediate representation of a PyTorch model (subclass of nn.Module) that can then be run in a high-performance environment such as C++.
Learn how PyTorch makes it possible to go from an existing Python model to a serialized representation that can be loaded and executed purely from C++, with no dependency on Python.
(optional) Exporting a Model from PyTorch to ONNX and Running it using ONNX Runtime
Convert a model defined in PyTorch into the ONNX format and then run it with ONNX Runtime.
(prototype) Introduction to Named Tensors in PyTorch
Learn how named tensors let you attach explicit names to tensor dimensions, making code more readable and less error-prone.
Get an overview of Channels Last memory format and understand how it is used to order NCHW tensors in memory preserving dimensions.
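The key point of channels last is that the logical shape stays NCHW while the physical memory order changes; a minimal sketch (assuming PyTorch is installed):

```python
import torch

# An NCHW tensor converted to the channels-last memory format;
# the shape stays (N, C, H, W), but the strides change so that
# the channel dimension is innermost in memory
x = torch.randn(2, 3, 4, 4)
y = x.contiguous(memory_format=torch.channels_last)

print(y.shape)  # unchanged: torch.Size([2, 3, 4, 4])
print(y.is_contiguous(memory_format=torch.channels_last))  # True
print(y.stride())  # channel stride is now 1
```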
Walk through an end-to-end example of training a model with the C++ frontend by training a DCGAN – a kind of generative model – to generate images of MNIST digits.
Create a neural network layer with no parameters using numpy. Then use scipy to create a neural network layer that has learnable weights.
Implement a custom TorchScript operator in C++, how to build it into a shared library, how to use it in Python to define TorchScript models and lastly how to load it into a C++ application for inference workloads.
This is a continuation of the custom operator tutorial, and introduces the API we’ve built for binding C++ classes into TorchScript and Python simultaneously.
This tutorial introduces the syntax for doing *dynamic inter-op parallelism* in TorchScript.
The autograd package helps build flexible and dynamic neural networks. In this tutorial, explore several examples of doing autograd in the PyTorch C++ frontend.
Learn how to use Ray Tune to find the best performing set of hyperparameters for your model.
Learn how to use torch.nn.utils.prune to sparsify your neural networks, and how to extend it to implement your own custom pruning technique.
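The simplest entry point into `torch.nn.utils.prune` is L1 unstructured pruning of a single layer, sketched below (layer sizes and the 30% pruning amount are arbitrary choices for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(10, 10)

# Zero out the 30% of weights with the smallest L1 magnitude
prune.l1_unstructured(layer, name="weight", amount=0.3)

# Pruning reparameterizes the module: the effective weight is
# weight_orig * weight_mask, recomputed on each forward pass
sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.2f}")  # 0.30
```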
(beta) Dynamic Quantization on an LSTM Word Language Model
Apply dynamic quantization, the easiest form of quantization, to a LSTM-based next word prediction model.
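The arithmetic underneath dynamic quantization is affine int8 mapping, where the scale and zero point are computed from the tensor's observed range at runtime. The pure-Python sketch below illustrates that mapping only; it is not PyTorch's implementation, and the helper names are made up:

```python
# Affine int8 quantization: q = round(x / scale) + zero_point,
# with scale and zero_point derived from the observed value range
# (which in dynamic quantization is measured at inference time).

def quantize(xs, qmin=-128, qmax=127):
    lo, hi = min(min(xs), 0.0), max(max(xs), 0.0)  # range must include 0
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(x / scale) + zero_point)) for x in xs]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

q, s, z = quantize([-1.0, 0.0, 0.5, 2.0])
approx = dequantize(q, s, z)
# approx is close to the originals, within one quantization step of size s
```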
Apply dynamic quantization to a BERT (Bidirectional Encoder Representations from Transformers) model.
(beta) Static Quantization with Eager Mode in PyTorch
Learn techniques to improve a model's accuracy: post-training static quantization, per-channel quantization, and quantization-aware training.
(beta) Quantized Transfer Learning for Computer Vision Tutorial
Extend the computer vision transfer learning tutorial by using a quantized model as the feature extractor.
Briefly go over all concepts and features in the distributed package. Use this document to find the distributed training technology that can best serve your application.
Learn how to implement model parallel, a distributed training technique that splits a single model onto different GPUs, rather than replicating the entire model on each GPU.
Learn the basics of when to use distributed data parallel versus data parallel, and work through an example to set it up.
Set up the distributed package of PyTorch, use the different communication strategies, and go over some of the internals of the package.
Learn how to build distributed training using the torch.distributed.rpc package.
Implementing a Parameter Server Using Distributed RPC Framework
Walk through a simple example of implementing a parameter server using PyTorch’s Distributed RPC framework.
Demonstrate how to implement distributed pipeline parallelism using RPC.
Implementing Batch RPC Processing Using Asynchronous Executions
Learn how to use rpc.functions.async_execution to implement batch RPC processing.
Combining Distributed DataParallel with Distributed RPC Framework
Walk through a simple example of how to combine distributed data parallelism with distributed model parallelism.
Examples of PyTorch
A set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.
PyTorch Cheat Sheet
Quick overview to essential PyTorch elements.
Tutorials on GitHub
Access PyTorch Tutorials from GitHub.