As the leaves drop, so do the new features! It’s time for another cool breeze of capabilities newly available in our application manager (a.k.a. “KOTS,” Kubernetes Off-The-Shelf), our Kubernetes installer (a.k.a. “kURL”), and the vendor portal (previously “Vendor Web”). Check out the recently shipped features and release highlights for October 2022 below.
A common thread throughout this month's releases is the introduction of the highly available rqlite database. That change, along with highly available MinIO on embedded clusters, allows us to simplify storage for vendors who only need local persistent volumes.
External registries let vendors keep their container repositories private; Replicated then ensures that only licensed users can obtain the relevant container images. Previously, the only way to configure an external registry was through the vendor portal or the API directly. CLI support makes configuring external registries less time consuming and easier to automate. Find more information in the Replicated CLI documentation.
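As a sketch of the new workflow, an external registry can now be configured straight from the Replicated CLI. The registry provider, credentials, and exact flag names below are illustrative assumptions; check `replicated registry add --help` for the options your provider requires.

```shell
# Assumes an authenticated Replicated vendor CLI session.
# Register an external Docker Hub registry (example credentials).
replicated registry add dockerhub \
  --username example-user \
  --password example-token

# Confirm the registry was configured.
replicated registry ls
```

Because the commands run against the vendor platform, they require a valid vendor account and API token.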
The application manager now uses rqlite instead of Postgres to store data such as version information. Unlike Postgres, rqlite replicates data itself, so it doesn’t need underlying storage that replicates data across nodes. This eliminates the requirement that vendors use distributed storage like Rook or Longhorn in their Kubernetes installer specs when deploying multi-node clusters, and it enables the use of lighter weight storage like OpenEBS Local PV. For embedded clusters that have at least three nodes, run the kubectl kots enable-ha command to run rqlite as three replicas so that data is replicated.
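For example, on an embedded cluster with three or more nodes, enabling replication is a single command (the namespace flag below is an illustrative assumption; use the namespace where the admin console is installed):

```shell
# Switch rqlite to three replicas so KOTS data is replicated
# across nodes. Run on an embedded cluster with >= 3 nodes.
kubectl kots enable-ha --namespace default
```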
This simplifies support for multi-node clusters for many vendors whose applications don’t require distributed storage, without sacrificing any reliability. Vendor apps that do require distributed storage and run on multi-node Kubernetes installer clusters can still choose to use an option like Rook.
In our first iterations of branding, CSS was included as a string in the Application custom resource, and font files were base64-encoded and included in that custom resource too. This was awkward for a few reasons: 1) copying and pasting CSS into a YAML file and getting the indentation right is harder than simply including a CSS file in a release, and 2) base64-encoded font files made the YAML unwieldy. Now, CSS and font files are included in the release, and the Application custom resource references their paths instead.
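The shape of the new approach looks roughly like the sketch below. The branding field names and file names are illustrative assumptions, not the exact schema; see the Application custom resource reference for the authoritative field names.

```yaml
# Illustrative sketch only; consult the Application custom resource
# docs for the exact branding schema.
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: example-app
spec:
  branding:
    css:
      - branding.css            # CSS file shipped in the release
    fonts:
      - fontFamily: ExampleFont
        sources:
          - example-font.woff   # font file shipped in the release
```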
On restricted clusters, collecting roles and role bindings in the support bundle is critical, and we were not previously doing that. If an end user hit permission errors while installing or working on a cluster, you had to ask them to gather role and binding information for you. With this change, the cluster resources collector gathers it automatically, so you can identify RBAC issues more easily and resolve problems faster.
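No spec changes are needed to benefit: any support bundle that already uses the cluster resources collector picks up the role and binding information. A minimal spec using that collector looks like this:

```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: SupportBundle
metadata:
  name: example
spec:
  collectors:
    # Now also collects roles and role bindings, in addition to
    # the other cluster-scoped and namespaced resources.
    - clusterResources: {}
```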
If a namespace is not passed to kubectl commands, the namespace from the current context is used. The kots CLI didn’t operate this way, so you always had to pass a namespace. Now the kots CLI uses the namespace from the current context as expected, making for a smoother experience. Note that for backwards compatibility, this is not true for the kots install command: a namespace can be passed explicitly, or the user is prompted to enter one during the install process.
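In practice this means the namespace can be set once on the kubectl context and then omitted from subsequent kots commands. The namespace and subcommands below are illustrative:

```shell
# Set the namespace on the current kubectl context once...
kubectl config set-context --current --namespace=my-app

# ...then kots commands pick it up automatically, no -n needed.
kubectl kots get apps        # previously required -n my-app
```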
kURL’s offering of Kubernetes v1.25 is now officially recognized as a certified Kubernetes distribution by the CNCF. This certification helps ensure that your application will run as expected on kURL clusters and be portable to other certified Kubernetes distributions. For more information on the CNCF certification process, see this CNCF page. The default spec on kurl.sh, and the kURL installer spec generated when a vendor creates a new app in the vendor portal, have been updated to Kubernetes v1.25. This helps ensure that vendors using kURL start off with the most recent Kubernetes version available.
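Pinning a kURL installer spec to the v1.25 line looks like the fragment below; the `.x` wildcard resolves to the latest patch release, and the containerd version shown is an illustrative assumption:

```yaml
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: example
spec:
  kubernetes:
    version: 1.25.x     # resolves to the latest 1.25 patch release
  containerd:
    version: 1.6.x      # runtime version shown is illustrative
```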
Vendors that need distributed storage want to be able to keep up with the latest versions of Rook. This release offers the latest features and bug fixes available in Rook 1.9.
MinIO now deploys as a highly available StatefulSet with EKCO when the OpenEBS Local PV storage class is enabled and at least three nodes are available. For more information, see Manage MinIO with EKCO in EKCO Add-on in the kURL documentation. This work complements the efforts in KOTS version 1.89.0 to remove Postgres and the related distributed storage requirement for KOTS. Now, if a vendor does not need distributed storage for their app, they can deploy multi-node clusters with the Kubernetes installer using OpenEBS with Local PV instead of Rook/Ceph or Longhorn.
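A kURL spec fragment combining these add-ons might look like the sketch below. The add-on versions and the Local PV storage class name are illustrative assumptions; with EKCO present and three or more nodes, MinIO is managed as a highly available StatefulSet.

```yaml
spec:
  ekco:
    version: latest
  openebs:
    version: 3.3.x                  # version shown is illustrative
    isLocalPVEnabled: true
    localPVStorageClassName: local
  minio:
    version: latest                 # HA StatefulSet with >= 3 nodes
```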
When Rook is installed on the cluster but not included in the updated kURL spec, the OpenEBS add-on version 3.3.0 and later automatically migrates any Rook-backed PersistentVolumeClaims (PVCs) to OpenEBS Local PV. This supported migration path benefits two use cases: (1) vendors that do not need distributed storage for their app and have upgraded to KOTS v1.89.0+ (which removed the distributed storage requirement for KOTS itself), and (2) vendors that wish to right-size clusters for applications that only ever needed one node but had Rook/Ceph because that was the past default. In these cases, vendors are encouraged to move from Rook/Ceph to OpenEBS with Local PV. This effort also benefits vendors looking for easier paths to upgrade off of old add-on versions. See the related Migrating kURL CSI docs.
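Triggering the migration is a matter of updating the spec rather than running a separate tool: drop the Rook add-on and include OpenEBS 3.3.0 or later. The version and storage class name below are illustrative assumptions.

```yaml
# Updated spec with the rook add-on removed. On upgrade, OpenEBS
# 3.3.0+ migrates existing Rook-backed PVCs to Local PV.
spec:
  openebs:
    version: 3.3.x                  # must be 3.3.0 or later
    isLocalPVEnabled: true
    localPVStorageClassName: local
  # rook: intentionally omitted from the updated spec
```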
We are introducing a new CNI, Flannel, as our intended go-forward CNI. Flannel's simplicity and ubiquity make it our future choice for networking: it is relatively easy to install and configure, and from an administrative perspective it offers a simple networking model that suits most use cases. It is also offered by default by many common Kubernetes cluster deployment tools and in many Kubernetes distributions. For now, we recommend staying with Weave for customer deployments, but we will soon have a migration path to the Flannel CNI. See the following links for additional information: the architectural decision record for the introduction of the Flannel add-on, and the current add-on limitations.
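For vendors who want to experiment ahead of the recommended migration path, selecting Flannel in a kURL spec is a matter of swapping the CNI add-on. The version below is an illustrative assumption; check the add-on documentation for supported versions and limitations.

```yaml
# Experimental sketch: use the flannel add-on in place of weave.
spec:
  flannel:
    version: 0.19.x   # version shown is illustrative
  # weave: omitted; a cluster runs one CNI add-on at a time
```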
The Kubernetes Installer Requirements section in the Enterprise docs now includes a much more comprehensive list of the system requirements that must be met to install an application on a kURL cluster. This helps ensure that users who are searching on docs.replicated.com for information about Kubernetes installer requirements can find what they are looking for quickly. Previously, this page listed only the minimum system requirements and linked users to kurl.sh for more information.
The RandomString section in the Reference docs features a more detailed description of how to generate ephemeral and persistent strings with the readonly and hidden properties. Updated formatting in this section also makes the information easier to skim and highlights the key use cases for setting readonly and hidden to true or false. This improvement is in response to a request from the Solutions Engineering team, who shared that it is common for vendors to require a random string to generate an initial password for a service like Postgres. Previously, the description was high-level, which made it difficult for many users to understand how to generate the persistent or ephemeral string required for their use case.
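The common case the docs now cover, generating an initial Postgres password, can be sketched with a Config custom resource like the one below. The group and item names are illustrative; the `readonly`/`hidden` behavior follows the documented semantics (persistent when `hidden: true` and `readonly: false`, regenerated on each update when `readonly: true`).

```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
  name: example-config
spec:
  groups:
    - name: database
      title: Database
      items:
        # Persistent: generated once and preserved across updates.
        - name: pg_password
          title: Postgres Password
          type: password
          hidden: true
          readonly: false
          value: "{{repl RandomString 32}}"
        # Ephemeral: regenerated with each application update.
        - name: session_seed
          title: Session Seed
          type: text
          readonly: true
          value: "{{repl RandomString 16}}"
```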
During the operation of a cluster, it can sometimes be necessary to expand or contract hard drive space for an application. The CRE team recognized that with no guiding documentation available, many vendors were simply guessing at how to perform these procedures. This new documentation provides vendors with more guidance on disk space expansion and contraction procedures when using Rook or OpenEBS with Local PV.
Vendors using kURL for embedded clusters have asked us to provide more guidance on how to choose the right storage. As a first step, we’ve added a new page to kurl.sh to help kURL users choose the right PV provisioner for their use case. We’ve also updated the default kURL spec to point to this guide (see screenshot).
This makes the vendor onboarding experience easier. Previously, it wasn’t clear from the TOC navigation what milestones the Getting Started tutorials covered because they were written as one long page. Now the UI tutorial and the CLI tutorial are split into smaller milestones that feel achievable, and a vendor can easily revisit specific sections for review.
Vendors routinely asked which fields can and cannot be templated. The Application custom resource topic has been updated with this information so that vendors can self-serve, reducing the support burden.
We updated the documentation on how to reset, reboot, and remove or replace nodes on kURL clusters with clearer requirements, prerequisites, warnings, and detailed steps. The goal of this enhancement is to make it easy for vendors and enterprises to find complete guidance on performing these maintenance tasks.
That’s it for the October release highlights! Want to learn more about these new features and what Replicated does to help vendors and customers install and manage modern apps on-prem? We would love to show you -- click here to schedule a demo.