Welcome to Polyaxon, a platform for building, training, and monitoring large-scale deep learning applications.
Polyaxon can be deployed into any data center or cloud provider, or it can be hosted and managed by Polyaxon, and it supports all the major deep learning frameworks, such as TensorFlow, MXNet, Caffe, and Torch.
Polyaxon makes it faster, easier, and more efficient to develop deep learning applications by intelligently managing workloads across containers and nodes. It also turns GPU servers into shared, self-service resources for your team or organization.
Here you will find a comprehensive guide for setting up Polyaxon on your cluster, and information for training and monitoring your deep learning applications.
This documentation starts with a quick start example, and then walks through the steps required to install and configure a complete Polyaxon deployment, either in the cloud or on your own infrastructure.
Kubernetes and the Polyaxon Helm chart provide sensible defaults for an initial deployment.
To get started, go to quick start with Polyaxon to start your first experiments.
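As a rough sketch of what a first experiment might look like, a minimal polyaxonfile could resemble the following. The base image, build step, and script name here are illustrative assumptions, not part of this guide; consult the quick start for the exact format supported by your Polyaxon version:

```yaml
---
version: 1
kind: experiment
build:
  # Illustrative base image; choose one matching your framework.
  image: tensorflow/tensorflow:1.4.1
  build_steps:
    # Install the client so the experiment can report metrics back to Polyaxon.
    - pip install --no-cache-dir -U polyaxon-client
run:
  # Hypothetical entry point for your training script.
  cmd: python model.py
```

Such a file would typically be submitted to the cluster with the Polyaxon CLI (e.g. a command along the lines of `polyaxon run -f polyaxonfile.yaml`); the quick start covers the full workflow.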
To set up a Polyaxon deployment, go to installation requirements and setup.
Once you have a Polyaxon deployment, you can review the Polyaxon architecture and learn how to organize your experimentation workflow.
- Extending Polyaxon deployment
- Customizing run environment
- Customizing node scheduling
- Customizing data and outputs
- Single Sign On (SSO)
- Replication & Concurrency
- PostgreSQL HA
- Private registries
- External repos