Installing the Thoras license
Thoras will provide you with a license key which should be made available to your installation. You have two options for managing this value:
- Pass the value into the Helm release via `$.Values.imageCredentials.password`, or
- Reference an existing secret
Option 1: Pass license in as Helm chart value
Pass the content of your license key file to `$.Values.imageCredentials.password` in your Helm release.
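For example, in your `values.yaml` (the value shown is a placeholder for the actual contents of your license key file):

```yaml
# values.yaml -- placeholder shown; substitute your actual license key
imageCredentials:
  password: "<contents-of-your-license-key-file>"
```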
Option 2: Reference existing secret
If you'd prefer to reference a pre-existing secret object instead of passing values into your Helm release, this option is for you! Ensure a secret of type `docker-registry` exists in the `thoras` namespace and that it's structured correctly:
- An example of how to create the secret manually:
- In the Helm values file, reference this secret by setting `.Values.imageCredentials.secretRef: "thoras-license"`.
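To illustrate the two steps above, the secret can be created with `kubectl`. This is a sketch: the registry server, username, and key path are placeholders; use the credentials provided by Thoras with your license.

```shell
# Sketch: create a docker-registry secret in the thoras namespace.
# <registry-server>, <username>, and the key path are placeholders.
kubectl create secret docker-registry thoras-license \
  --namespace thoras \
  --docker-server=<registry-server> \
  --docker-username=<username> \
  --docker-password="$(cat /path/to/license-key)"
```

With the secret in place, point the chart at it via `.Values.imageCredentials.secretRef: "thoras-license"`.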
Confirm license installed successfully
You can confirm the license has been installed successfully if the pods in the `thoras` namespace come up into `Running` status after one or two minutes. Pod error statuses such as `ImagePullBackOff` might indicate an issue.
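One way to check is with standard `kubectl`:

```shell
# List pods in the thoras namespace and watch for Running status
kubectl get pods --namespace thoras --watch
```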
Configure Worker Capacity
Thoras is very lightweight, but you'll still want to allocate enough forecast workers to accommodate the number of workloads you're managing:
- The number of workers is managed by the `$.Values.thorasForecast.worker.replicas` Helm value
- The default number of workers is `1`
- For maximum performance, Thoras recommends a `1:100` ratio of `worker:AIScaleTarget`
  - For example, if you have 300 `AIScaleTargets`, you'll want to set `$.Values.thorasForecast.worker.replicas` to `3`
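The `1:100` rule above can be sketched as a quick calculation (an illustration only; rounding up for counts that aren't exact multiples of 100 is an assumption, not stated in the docs):

```python
import math

# 1:100 worker:AIScaleTarget ratio recommended above
def recommended_workers(num_aiscaletargets: int) -> int:
    """Forecast worker replicas for a given number of AIScaleTargets."""
    return max(1, math.ceil(num_aiscaletargets / 100))

print(recommended_workers(300))  # 300 AIScaleTargets -> 3 workers
```

The result is what you would set as `$.Values.thorasForecast.worker.replicas`.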
Persistent Volumes
To enable production-grade predictions, Thoras requires a Kubernetes persistent volume (PV) for persisting historical usage metrics. It's critical to ensure disk persistence is configured for any clusters that host workloads that people or things rely on. Read on for an overview!

Understanding Storage Requirements
You'll want to ensure Thoras has the disk space it needs now and in the future. This may or may not require configuration based on your backend storage provider.

Understanding Usage
The storage requirement for a managed workload is `3GB`. So for a cluster with twenty managed workloads, Thoras requires `60GB` of disk space:
(20 AIScaleTargets * 3GB) == 60GB
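The sizing rule above, as a small helper (illustrative only, using the 3GB-per-workload figure from this section):

```python
# Disk sizing rule: 3GB per managed workload (AIScaleTarget)
GB_PER_WORKLOAD = 3

def required_storage_gb(num_aiscaletargets: int) -> int:
    """Total persistent volume size, in GB, for a given workload count."""
    return num_aiscaletargets * GB_PER_WORKLOAD

print(required_storage_gb(20))  # 20 AIScaleTargets -> 60 GB
```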
Understanding Kubernetes Storage Backend
In general there are two patterns for Kubernetes storage backends with regard to disk space configuration:

Growable Storage Backends
These are PV storage backends that don't enforce a fixed size; they dynamically grow as needed and require no Thoras configuration to specify volume size. Examples include:
- AWS Elastic File System (EFS)
- Google Cloud Filestore (enterprise tier)
- Azure Files
- CephFS
Fixed-Size Storage Backends
These are PV storage backends that require explicit provisioning of a fixed capacity and enforce the size specified in the `PersistentVolumeClaim`. These backends require you to specify the desired size of the disk, which you should configure as a Helm chart value (read on for details on how to do that). Examples include:
- AWS Elastic Block Store (EBS)
- Google Persistent Disk (GCP PD)
- Azure Disk Storage
- VMware vSphere Volumes
- Determine how much total size will be needed using the guidance above
- Configure the storage size by setting the following Helm value:
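A hypothetical sketch of what this could look like (the `thorasForecast.storage.size` key name is an assumption, not confirmed by this document; consult the chart's values reference for the exact key):

```yaml
# values.yaml -- hypothetical key name; check the chart's values reference
thorasForecast:
  storage:
    size: 60Gi   # e.g. 20 AIScaleTargets * 3GB
```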
Managing Persistent Volumes
You may have an established workflow for provisioning Kubernetes Persistent Volumes (PV). That's great! Here's how to get up and running depending on your preferred approach:

Referencing an existing StorageClass provisioner
If you have a Kubernetes `StorageClass` that you'd like to use to dynamically provision your PV (such as an EFS StorageClass), simply reference the storage class name in your Thoras Helm chart `values.yaml`:
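A hypothetical sketch (the `thorasForecast.storage.storageClassName` key name and the `efs-sc` class name are assumptions, not confirmed by this document; consult the chart's values reference for the exact key):

```yaml
# values.yaml -- hypothetical key name; check the chart's values reference
thorasForecast:
  storage:
    storageClassName: efs-sc   # name of your existing StorageClass
```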
Referencing an existing PV
If you have a workflow for provisioning a PV and would like to simply reference the PV directly by name, add the following config block to your Helm `values.yaml`:
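A hypothetical sketch (the `thorasForecast.storage.existingVolumeName` key name and the PV name are assumptions, not confirmed by this document; consult the chart's values reference for the exact key):

```yaml
# values.yaml -- hypothetical key name; check the chart's values reference
thorasForecast:
  storage:
    existingVolumeName: thoras-metrics-pv   # name of your pre-provisioned PV
```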