注意: 急于开始的读者可以直接前往快速入门 。
正在使用 Kubebuilder v1 或 v2 吗?请查看 v1 或 v2 的旧版文档。
Kubernetes 用户将通过学习 API 设计与实现背后的基本概念,对 Kubernetes 形成更深刻的理解。本书将教会读者如何开发自己的 Kubernetes API,以及核心 Kubernetes API 的设计原则。
包括:
Kubernetes API 和资源的结构
API 版本语义
自愈
垃圾回收和终结器
声明式 vs 命令式 API
基于级别 vs 基于边缘的 API
资源 vs 子资源
API 扩展开发者将学习实现规范 Kubernetes API 背后的原则和概念,以及快速执行的简单工具和库。本书涵盖了扩展开发者常遇到的陷阱和误解。
包括:
如何将多个事件批量处理为单个协调调用
如何配置定期协调
即将推出
何时使用列表缓存 vs 实时查找
垃圾回收 vs 终结器
如何使用声明式 vs Webhook 验证
如何实现 API 版本管理
Kubernetes API 为对象提供了一致和明确定义的端点,这些对象遵循一致和丰富的结构。
这种方法培育了一个丰富的工具和库生态系统,用于处理 Kubernetes API。
用户通过将对象声明为 YAML 或 JSON 配置,并使用常见工具管理这些对象,来使用这些 API。
将服务构建为 Kubernetes API 相比于普通的 REST,提供了许多优势,包括:
托管的 API 端点、存储和验证。
丰富的工具和 CLI,如 kubectl 和 kustomize。
对 AuthN 和细粒度 AuthZ 的支持。
通过 API 版本控制和转换支持 API 演进。
促进自适应/自愈 API 的发展,这些 API 可以持续响应系统状态的变化,而无需用户干预。
Kubernetes 作为托管环境
开发人员可以构建并发布自己的 Kubernetes API,以安装到运行中的 Kubernetes 集群中。
如果您想要为本书或代码做出贡献,请先阅读我们的贡献 指南。
下图将帮助您更好地理解 Kubebuilder 的概念和架构。
本快速入门指南将涵盖以下内容:
go 版本 v1.20.0+
docker 版本 17.03+。
kubectl 版本 v1.11.3+。
访问 Kubernetes v1.11.3+ 集群。
Kubebuilder 创建的项目包含一个 Makefile,它会按照项目创建时所确定的版本安装工具。这些工具包括:
在 Makefile
和 go.mod
文件中定义的版本是经过测试的版本,因此建议使用指定的版本。
安装 kubebuilder :
# 下载 kubebuilder 并在本地安装。
curl -L -o kubebuilder "https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)"
chmod +x kubebuilder && mv kubebuilder /usr/local/bin/
Kubebuilder 通过命令 kubebuilder completion <bash|fish|powershell|zsh>
提供自动补全支持,可以节省大量输入。有关详细信息,请参阅自动补全 文档。
创建一个目录,然后在其中运行 init 命令以初始化一个新项目。以下是一个示例。
mkdir -p ~/projects/guestbook
cd ~/projects/guestbook
kubebuilder init --domain my.domain --repo my.domain/guestbook
运行以下命令以创建一个名为 webapp/v1
的新 API(组/版本),并在其中创建一个名为 Guestbook
的新 Kind(CRD):
kubebuilder create api --group webapp --version v1 --kind Guestbook
如果按 y
键创建资源 [y/n] 和创建控制器 [y/n],则会创建文件 api/v1/guestbook_types.go
,其中定义了 API,
以及 internal/controller/guestbook_controller.go
,其中实现了此 Kind(CRD)的调和业务逻辑。
可选步骤: 编辑 API 定义和调和业务逻辑。有关更多信息,请参阅设计 API 和控制器概述 。
如果您正在编辑 API 定义,可以使用以下命令生成诸如自定义资源(CRs)或自定义资源定义(CRDs)之类的清单:
make manifests
点击此处查看示例。(api/v1/guestbook_types.go)
// GuestbookSpec 定义了 Guestbook 的期望状态
type GuestbookSpec struct {
// 插入其他规范字段 - 集群的期望状态
// 重要提示:在修改此文件后运行 "make" 以重新生成代码
// 实例数量
// +kubebuilder:validation:Minimum=1
// +kubebuilder:validation:Maximum=10
Size int32 `json:"size"`
// GuestbookSpec 配置的 ConfigMap 名称
// +kubebuilder:validation:MaxLength=15
// +kubebuilder:validation:MinLength=1
ConfigMapName string `json:"configMapName"`
// +kubebuilder:validation:Enum=Phone;Address;Name
Type string `json:"alias,omitempty"`
}
// GuestbookStatus 定义了 Guestbook 的观察状态
type GuestbookStatus struct {
// 插入其他状态字段 - 定义集群的观察状态
// 重要提示:在修改此文件后运行 "make" 以重新生成代码
// 活动的 Guestbook 节点的 PodName
Active string `json:"active"`
// 待机的 Guestbook 节点的 PodNames
Standby []string `json:"standby"`
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:resource:scope=Cluster
// Guestbook 是 guestbooks API 的架构
type Guestbook struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec GuestbookSpec `json:"spec,omitempty"`
Status GuestbookStatus `json:"status,omitempty"`
}
您需要一个 Kubernetes 集群来运行。您可以使用 KIND 获取本地集群进行测试,或者运行在远程集群上。
将 CRD 安装到集群中:
make install
为了快速反馈和代码级调试,运行您的控制器(这将在前台运行,如果要保持运行状态,请切换到新终端):
make run
如果按 y
键创建资源 [y/n],则会在您的样本中为您的 CRD 创建一个 CR(如果已更改 API 定义,请确保先编辑它们):
kubectl apply -k config/samples/
当您的控制器准备好打包并在其他集群中进行测试时,请执行以下步骤。
构建并将您的镜像推送到 IMG
指定的位置:
make docker-build docker-push IMG=<some-registry>/<project-name>:tag
使用由 IMG
指定的镜像将控制器部署到集群中:
make deploy IMG=<some-registry>/<project-name>:tag
从集群中删除您的 CRD:
make uninstall
从集群中卸载控制器:
make undeploy
现在,查看架构概念图 以获得更好的概述,并跟随CronJob 教程 ,以便通过开发演示示例项目更好地了解其工作原理。
通过遵循Operator 模式 ,不仅可以提供所有预期的资源,还可以在执行时动态、以编程方式管理它们。为了说明这个想法,想象一下,如果有人意外更改了配置或者误删了某个资源;在这种情况下,操作员可以在没有任何人工干预的情况下进行修复。
我们将创建一个示例项目,以便让您了解它是如何工作的。这个示例将会:
对账一个 Memcached CR - 代表着在集群上部署/管理的 Memcached 实例
创建一个使用 Memcached 镜像的 Deployment
不允许超过 CR 中定义的大小的实例
更新 Memcached CR 的状态
请按照以下步骤操作。
首先,创建一个用于您的项目的目录,并进入该目录,然后使用 kubebuilder
进行初始化:
mkdir $GOPATH/memcached-operator
cd $GOPATH/memcached-operator
kubebuilder init --domain=example.com
接下来,我们将创建一个新的 API,负责部署和管理我们的 Memcached 解决方案。在这个示例中,我们将使用 Deploy Image 插件来获取我们解决方案的全面代码实现。
kubebuilder create api --group cache \
--version v1alpha1 \
--kind Memcached \
--image=memcached:1.4.36-alpine \
--image-container-command="memcached,-m=64,-o,modern,-v" \
--image-container-port="11211" \
--run-as-user="1001" \
--plugins="deploy-image/v1-alpha" \
--make=false
这个命令的主要目的是为 Memcached 类型生成自定义资源(CR)和自定义资源定义(CRD)。它使用 group cache.example.com
和 version v1alpha1
来唯一标识 Memcached 类型的新 CRD。通过利用 Kubebuilder 工具,我们可以为这些平台定义我们的 API 和对象。虽然在这个示例中我们只添加了一种资源类型,但您可以根据需要拥有尽可能多的 Groups
和 Kinds
。简而言之,CRD 是我们自定义对象的定义,而 CR 是它们的实例。
在这个示例中,可以看到 Memcached 类型(CRD)具有一些特定规格。这些是由 Deploy Image 插件构建的,用于管理目的的默认脚手架:
MemcachedSpec
部分是我们封装所有可用规格和配置的地方,用于我们的自定义资源(CR)。此外,值得注意的是,我们使用了状态条件。这确保了对 Memcached CR 的有效管理。当发生任何更改时,这些条件为我们提供了必要的数据,以便在 Kubernetes 集群中了解此资源的当前状态。这类似于我们为 Deployment 资源获取的状态信息。
从:api/v1alpha1/memcached_types.go
// MemcachedSpec 定义了 Memcached 的期望状态
type MemcachedSpec struct {
// 插入其他规格字段 - 集群的期望状态
// 重要:修改此文件后运行 "make" 以重新生成代码
// Size 定义了 Memcached 实例的数量
// 以下标记将使用 OpenAPI v3 schema 来验证该值
// 了解更多信息:https://book.kubebuilder.io/reference/markers/crd-validation.html
// +kubebuilder:validation:Minimum=1
// +kubebuilder:validation:Maximum=3
// +kubebuilder:validation:ExclusiveMaximum=false
Size int32 `json:"size,omitempty"`
// Port 定义了将用于使用镜像初始化容器的端口
ContainerPort int32 `json:"containerPort,omitempty"`
}
// MemcachedStatus 定义了 Memcached 的观察状态
type MemcachedStatus struct {
// 代表了 Memcached 当前状态的观察结果
// Memcached.status.conditions.type 为:"Available"、"Progressing" 和 "Degraded"
// Memcached.status.conditions.status 为 True、False、Unknown 中的一个
// Memcached.status.conditions.reason 的值应为驼峰字符串,特定条件类型的产生者可以为此字段定义预期值和含义,以及这些值是否被视为 API 的保证
// Memcached.status.conditions.Message 是一个人类可读的消息,指示有关转换的详细信息
// 了解更多信息:https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties
Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"`
}
因此,当我们向此文件添加新规格并执行 make generate
命令时,我们使用 controller-gen 生成了 CRD 清单,该清单位于 config/crd/bases
目录下。
此外,值得注意的是,我们正在使用 标记
,例如 +kubebuilder:validation:Minimum=1
。这些标记有助于定义验证和标准,确保用户提供的数据 - 当他们为 Memcached 类型创建或编辑自定义资源时 - 得到适当的验证。有关可用标记的全面列表和详细信息,请参阅标记文档 。
观察 CRD 中的验证模式;此模式确保 Kubernetes API 正确验证应用的自定义资源(CR):
从:config/crd/bases/cache.example.com_memcacheds.yaml
description: MemcachedSpec 定义了 Memcached 的期望状态
properties:
containerPort:
description: Port 定义了将用于使用镜像初始化容器的端口
format: int32
type: integer
size:
description: 'Size 定义了 Memcached 实例的数量 以下标记将使用 OpenAPI v3 schema 来验证该值 了解更多信息:https://book.kubebuilder.io/reference/markers/crd-validation.html'
format: int32
maximum: 3 ## 从标记 +kubebuilder:validation:Maximum=3 生成
minimum: 1 ## 从标记 +kubebuilder:validation:Minimum=1 生成
type: integer
位于 “config/samples” 目录下的清单作为可以应用于集群的自定义资源的示例。
在这个特定示例中,通过将给定资源应用到集群中,我们将生成一个大小为 1 的 Deployment 实例(参见 size: 1
)。
从:config/samples/cache_v1alpha1_memcached.yaml
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
name: memcached-sample
spec:
# TODO(用户):编辑以下值,确保 Operand 在集群上必须拥有的 Pod/实例数量
size: 1
# TODO(用户):编辑以下值,确保容器具有正确的端口进行初始化
containerPort: 11211
对账(Reconcile)函数起着关键作用:它根据其中嵌入的业务逻辑,确保资源与其规格保持同步。它的作用类似于一个循环,不断检查条件并执行操作,直到所有条件都满足其实现。以下是说明这一点的伪代码:
reconcile App {
// 检查应用的 Deployment 是否存在,如果不存在则创建一个
// 如果出现错误,则重新开始对账
if err != nil {
return reconcile.Result{}, err
}
// 检查应用的 Service 是否存在,如果不存在则创建一个
// 如果出现错误,则重新开始对账
if err != nil {
return reconcile.Result{}, err
}
// 查找数据库 CR/CRD
// 检查数据库 Deployment 的副本大小
// 如果 deployment.replicas 的大小与 cr.size 不匹配,则更新它
// 然后,从头开始对账。例如,通过返回 `reconcile.Result{Requeue: true}, nil`。
if err != nil {
return reconcile.Result{Requeue: true}, nil
}
...
// 如果循环结束时:
// 所有操作都成功执行,对账就可以停止了
return reconcile.Result{}, nil
}
以下是重新开始对账的一些可能返回选项:
return ctrl.Result{}, err
return ctrl.Result{Requeue: true}, nil
在执行成功之后,或者不需要再次进行对账时,可以使用以下返回值停止对账:
return ctrl.Result{}, nil
若要在一段时间之后再次进行对账,可以返回:
return ctrl.Result{RequeueAfter: nextRun.Sub(r.Now())}, nil
当将自定义资源应用到集群时,有一个指定的控制器来管理 Memcached 类型。您可以检查其对账是如何实现的:
从:internal/controller/memcached_controller.go
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := log.FromContext(ctx)
// 获取 Memcached 实例
// 目的是检查是否在集群上应用了 Memcached 类型的自定义资源
// 如果没有,我们将返回 nil 以停止对账过程
memcached := &examplecomv1alpha1.Memcached{}
err := r.Get(ctx, req.NamespacedName, memcached)
if err != nil {
if apierrors.IsNotFound(err) {
// 如果找不到自定义资源,通常意味着它已被删除或尚未创建
// 这样,我们将停止对账过程
log.Info("未找到 memcached 资源。忽略,因为对象可能已被删除")
return ctrl.Result{}, nil
}
// 读取对象时出错 - 重新排队请求
log.Error(err, "获取 memcached 失败")
return ctrl.Result{}, err
}
// 当没有状态可用时,让我们将状态设置为 Unknown
if memcached.Status.Conditions == nil || len(memcached.Status.Conditions) == 0 {
meta.SetStatusCondition(&memcached.Status.Conditions,
metav1.Condition{
Type: typeAvailableMemcached,
Status: metav1.ConditionUnknown,
Reason: "对账中",
Message: "开始对账"
})
if err = r.Status().Update(ctx, memcached); err != nil {
log.Error(err, "更新 Memcached 状态失败")
return ctrl.Result{}, err
}
// 更新状态后,让我们重新获取 memcached 自定义资源
// 以便我们在集群上拥有资源的最新状态,并且避免
// 引发错误 "对象已被修改,请将您的更改应用到最新版本,然后重试"
// 如果我们尝试在后续操作中再次更新它,这将重新触发对账过程
if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
log.Error(err, "重新获取 memcached 失败")
return ctrl.Result{}, err
}
}
// 添加 finalizer。然后,我们可以定义在删除自定义资源之前应执行的一些操作。
// 更多信息:https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers
if !controllerutil.ContainsFinalizer(memcached, memcachedFinalizer) {
log.Info("为 Memcached 添加 Finalizer")
if ok := controllerutil.AddFinalizer(memcached, memcachedFinalizer); !ok {
log.Error(err, "无法将 finalizer 添加到自定义资源")
return ctrl.Result{Requeue: true}, nil
}
if err = r.Update(ctx, memcached); err != nil {
log.Error(err, "更新自定义资源以添加 finalizer 失败")
return ctrl.Result{}, err
}
}
// 检查是否标记要删除 Memcached 实例,这通过设置删除时间戳来表示。
isMemcachedMarkedToBeDeleted := memcached.GetDeletionTimestamp() != nil
if isMemcachedMarkedToBeDeleted {
if controllerutil.ContainsFinalizer(memcached, memcachedFinalizer) {
log.Info("在删除 CR 之前执行 Finalizer 操作")
// 在这里添加一个状态 "Downgrade",以反映该资源开始其终止过程。
meta.SetStatusCondition(&memcached.Status.Conditions,
metav1.Condition{
Type: typeDegradedMemcached,
Status: metav1.ConditionUnknown,
Reason: "Finalizing",
Message: fmt.Sprintf("执行自定义资源的 finalizer 操作:%s ", memcached.Name)})
if err := r.Status().Update(ctx, memcached); err != nil {
log.Error(err, "更新 Memcached 状态失败")
return ctrl.Result{}, err
}
// 执行在删除 finalizer 之前需要的所有操作,并允许
// Kubernetes API 删除自定义资源。
r.doFinalizerOperationsForMemcached(memcached)
// TODO(用户):如果您在 doFinalizerOperationsForMemcached 方法中添加操作
// 那么您需要确保一切顺利,然后再删除和更新 Downgrade 状态
// 否则,您应该在此重新排队。
// 在更新状态前重新获取 memcached 自定义资源
// 以便我们在集群上拥有资源的最新状态,并且避免
// 引发错误 "对象已被修改,请将您的更改应用到最新版本,然后重试"
// 如果我们尝试在后续操作中再次更新它,这将重新触发对账过程
if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
log.Error(err, "重新获取 memcached 失败")
return ctrl.Result{}, err
}
meta.SetStatusCondition(&memcached.Status.Conditions,
metav1.Condition{
Type: typeDegradedMemcached,
Status: metav1.ConditionTrue,
Reason: "Finalizing",
Message: fmt.Sprintf("自定义资源 %s 的 finalizer 操作已成功完成", memcached.Name)
})
if err := r.Status().Update(ctx, memcached); err != nil {
log.Error(err, "更新 Memcached 状态失败")
return ctrl.Result{}, err
}
log.Info("成功执行操作后移除 Memcached 的 Finalizer")
if ok := controllerutil.RemoveFinalizer(memcached, memcachedFinalizer); !ok {
log.Error(err, "移除 Memcached 的 finalizer 失败")
return ctrl.Result{Requeue: true}, nil
}
if err := r.Update(ctx, memcached); err != nil {
log.Error(err, "移除 Memcached 的 finalizer 失败")
return ctrl.Result{}, err
}
}
return ctrl.Result{}, nil
}
// 检查部署是否已经存在,如果不存在则创建新的
found := &appsv1.Deployment{}
err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
if err != nil && apierrors.IsNotFound(err) {
// 定义一个新的部署
dep, err := r.deploymentForMemcached(memcached)
if err != nil {
log.Error(err, "为 Memcached 定义新的 Deployment 资源失败")
// 以下实现将更新状态
meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{
Type: typeAvailableMemcached,
Status: metav1.ConditionFalse,
Reason: "对账中",
Message: fmt.Sprintf("为自定义资源创建 Deployment 失败 (%s): (%s)", memcached.Name, err)})
if err := r.Status().Update(ctx, memcached); err != nil {
log.Error(err, "更新 Memcached 状态失败")
return ctrl.Result{}, err
}
return ctrl.Result{}, err
}
log.Info("创建新的 Deployment",
"Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
if err = r.Create(ctx, dep); err != nil {
log.Error(err, "创建新的 Deployment 失败",
"Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
return ctrl.Result{}, err
}
// 部署成功创建
// 我们将重新排队对账,以便确保状态
// 并继续进行下一步操作
return ctrl.Result{RequeueAfter: time.Minute}, nil
} else if err != nil {
log.Error(err, "获取 Deployment 失败")
// 让我们返回错误以重新触发对账
return ctrl.Result{}, err
}
// CRD API 定义了 Memcached 类型具有 MemcachedSpec.Size 字段
// 以设置集群上所需的 Deployment 实例数量。
// 因此,以下代码将确保 Deployment 大小与我们对账的自定义资源的 Size spec 相同。
size := memcached.Spec.Size
if *found.Spec.Replicas != size {
found.Spec.Replicas = &size
if err = r.Update(ctx, found); err != nil {
log.Error(err, "更新 Deployment 失败",
"Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
// 在更新状态前重新获取 memcached 自定义资源
// 以便我们在集群上拥有资源的最新状态,并且避免
// 引发错误 "对象已被修改,请将您的更改应用到最新版本,然后重试"
// 如果我们尝试在后续操作中再次更新它,这将重新触发对账过程
if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
log.Error(err, "重新获取 memcached 失败")
return ctrl.Result{}, err
}
// 以下实现将更新状态
meta.SetStatusCondition(&memcached.Status.Conditions,
metav1.Condition{
Type: typeAvailableMemcached,
Status: metav1.ConditionFalse,
Reason: "调整大小",
Message: fmt.Sprintf("更新自定义资源的大小失败 (%s): (%s)", memcached.Name, err)
})
if err := r.Status().Update(ctx, memcached); err != nil {
log.Error(err, "更新 Memcached 状态失败")
return ctrl.Result{}, err
}
return ctrl.Result{}, err
}
// 现在,我们更新大小后,希望重新排队对账
// 以便确保我们拥有资源的最新状态
// 并帮助确保集群上的期望状态
return ctrl.Result{Requeue: true}, nil
}
// 以下实现将更新状态
meta.SetStatusCondition(&memcached.Status.Conditions,
metav1.Condition{
Type: typeAvailableMemcached,
Status: metav1.ConditionTrue,
Reason: "对账中",
Message: fmt.Sprintf("为自定义资源创建 %d 个副本的 Deployment 成功", memcached.Name, size)
})
if err := r.Status().Update(ctx, memcached); err != nil {
log.Error(err, "更新 Memcached 状态失败")
return ctrl.Result{}, err
}
return ctrl.Result{}, nil
}
该控制器持续地观察与该类型相关的任何事件,因此相关的变化会立即触发控制器的对账过程。值得注意的是,我们已经实现了 watches 特性(更多信息)。这使我们能够监视与创建、更新或删除 Memcached 类型的自定义资源相关的事件,以及由其相应控制器编排和拥有的 Deployment 的相关事件。请注意以下代码:
// SetupWithManager 使用 Manager 设置控制器。
// 请注意,也将监视 Deployment 以确保其在集群中处于期望的状态
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&examplecomv1alpha1.Memcached{}). // 为 Memcached 类型创建监视
Owns(&appsv1.Deployment{}). // 为其控制器拥有的 Deployment 创建监视
Complete(r)
}
请注意,当我们创建用于运行 Memcached 镜像的 Deployment 时,我们正在设置引用:
// 为 Deployment 设置 ownerRef
// 更多信息:https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/
if err := ctrl.SetControllerReference(memcached, dep, r.Scheme); err != nil {
return nil, err
}
RBAC 权限现在通过 RBAC markers 进行配置,这些标记用于生成和更新 config/rbac/
中的清单文件。这些标记可以在每个控制器的 Reconcile()
方法中找到(并应该被定义),请看我们示例中的实现方式:
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
//+kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch
重要的是,如果您希望添加或修改 RBAC 规则,可以通过更新或添加控制器中的相应标记来实现。在进行必要的更改后,运行 make manifests
命令。这将促使 controller-gen 刷新位于 config/rbac
下的文件。
对于每个类型,Kubebuilder 将生成具有查看和编辑权限的脚手架规则(例如 memcached_editor_role.yaml
和 memcached_viewer_role.yaml
)。
当您使用 make deploy IMG=myregistry/example:1.0.0
部署解决方案时,这些规则不会应用于集群。
这些规则旨在帮助系统管理员知道在授予用户组权限时应允许什么。
Manager 在监督控制器方面扮演着至关重要的角色,这些控制器进而使集群端的操作成为可能。如果您检查 cmd/main.go
文件,您会看到以下内容:
...
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
Metrics: metricsserver.Options{BindAddress: metricsAddr},
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: "1836d577.testproject.org",
// LeaderElectionReleaseOnCancel 定义了领导者在 Manager 结束时是否应主动放弃领导权。
// 这要求二进制在 Manager 停止时立即结束,否则此设置是不安全的。设置此选项显著加快主动领导者转换的速度,
// 因为新领导者无需等待 LeaseDuration 时间。
//
// 在提供的默认脚手架中,程序在 Manager 停止后立即结束,因此启用此选项是可以的。但是,
// 如果您正在进行任何操作,例如在 Manager 停止后执行清理操作,那么使用它可能是不安全的。
// LeaderElectionReleaseOnCancel: true,
})
if err != nil {
setupLog.Error(err, "无法启动 Manager")
os.Exit(1)
}
上面的代码片段概述了 Manager 的配置选项 。虽然我们在当前示例中不会更改这些选项,但了解其位置以及初始化您的基于 Operator 的镜像的过程非常重要。Manager 负责监督为您的 Operator API 生成的控制器。
此时,您可以执行 快速入门 中突出显示的命令。通过执行 make docker-build IMG=myregistry/example:1.0.0
,您将为项目构建镜像。出于测试目的,建议将此镜像发布到公共注册表。这样可以确保轻松访问,无需额外的配置。完成后,您可以使用 make deploy IMG=myregistry/example:1.0.0
命令将镜像部署到集群中。
要深入了解开发解决方案,请考虑阅读提供的教程。
要了解优化您的方法的见解,请参阅最佳实践 文档。
许多教程都以一些非常牵强的设置或一些用于传达基础知识的玩具应用程序开头,然后在更复杂的内容上停滞不前。相反,这个教程应该带您(几乎)完整地了解 Kubebuilder 的复杂性,从简单开始逐步构建到相当全面的内容。
我们假装(当然,这有点牵强)我们终于厌倦了在 Kubernetes 中使用非 Kubebuilder 实现的 CronJob 控制器的维护负担,我们想要使用 Kubebuilder 进行重写。
CronJob 控制器的任务(不是故意的双关语)是在 Kubernetes 集群上定期间隔运行一次性任务。它通过在 Job 控制器的基础上构建来完成这一点,Job 控制器的任务是运行一次性任务一次,并确保其完成。
我们不打算试图重写 Job 控制器,而是将其视为一个机会来了解如何与外部类型交互。
如快速入门 中所述,我们需要构建一个新项目的框架。确保您已经安装了 Kubebuilder ,然后构建一个新项目:
# 创建一个项目目录,然后运行初始化命令。
mkdir project
cd project
# 我们将使用 tutorial.kubebuilder.io 作为域,
# 因此所有 API 组将是 <group>.tutorial.kubebuilder.io。
kubebuilder init --domain tutorial.kubebuilder.io --repo tutorial.kubebuilder.io/project
现在我们已经有了一个项目框架,让我们来看看 Kubebuilder 到目前为止为我们生成了什么…
在构建新项目的框架时,Kubebuilder 为我们提供了一些基本的样板文件。
首先是构建项目的基本基础设施:
go.mod
:与我们的项目匹配的新 Go 模块,具有基本依赖项
module tutorial.kubebuilder.io/project
go 1.21
require (
github.com/onsi/ginkgo/v2 v2.14.0
github.com/onsi/gomega v1.30.0
github.com/robfig/cron v1.2.0
k8s.io/api v0.29.0
k8s.io/apimachinery v0.29.0
k8s.io/client-go v0.29.0
sigs.k8s.io/controller-runtime v0.17.0
)
require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch/v5 v5.8.0 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/go-logr/logr v1.4.1 // indirect
github.com/go-logr/zapr v1.3.0 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.3 // indirect
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/imdario/mergo v0.3.6 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/prometheus/client_golang v1.18.0 // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/common v0.45.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.26.0 // indirect
golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e // indirect
golang.org/x/net v0.19.0 // indirect
golang.org/x/oauth2 v0.12.0 // indirect
golang.org/x/sys v0.16.0 // indirect
golang.org/x/term v0.15.0 // indirect
golang.org/x/text v0.14.0 // indirect
golang.org/x/time v0.3.0 // indirect
golang.org/x/tools v0.16.1 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.31.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiextensions-apiserver v0.29.0 // indirect
k8s.io/component-base v0.29.0 // indirect
k8s.io/klog/v2 v2.110.1 // indirect
k8s.io/kube-openapi v0.0.0-20231010175941-2dd684a91f00 // indirect
k8s.io/utils v0.0.0-20230726121419-3b25d923346b // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
sigs.k8s.io/yaml v1.4.0 // indirect
)
Makefile
:用于构建和部署控制器的 Make 目标
# Image URL to use all building/pushing image targets
IMG ?= controller:latest
# ENVTEST_K8S_VERSION refers to the version of kubebuilder assets to be downloaded by envtest binary.
ENVTEST_K8S_VERSION = 1.29.0
# Get the currently used golang install path (in GOPATH/bin, unless GOBIN is set)
ifeq (,$(shell go env GOBIN))
GOBIN=$(shell go env GOPATH)/bin
else
GOBIN=$(shell go env GOBIN)
endif
# CONTAINER_TOOL defines the container tool to be used for building images.
# Be aware that the target commands are only tested with Docker which is
# scaffolded by default. However, you might want to replace it to use other
# tools. (i.e. podman)
CONTAINER_TOOL ?= docker
# Setting SHELL to bash allows bash commands to be executed by recipes.
# Options are set to exit when a recipe line exits non-zero or a piped command fails.
SHELL = /usr/bin/env bash -o pipefail
.SHELLFLAGS = -ec
.PHONY: all
all: build
##@ General
# The help target prints out all targets with their descriptions organized
# beneath their categories. The categories are represented by '##@' and the
# target descriptions by '##'. The awk command is responsible for reading the
# entire set of makefiles included in this invocation, looking for lines of the
# file as xyz: ## something, and then pretty-format the target and help. Then,
# if there's a line with ##@ something, that gets pretty-printed as a category.
# More info on the usage of ANSI control characters for terminal formatting:
# https://en.wikipedia.org/wiki/ANSI_escape_code#SGR_parameters
# More info on the awk command:
# http://linuxcommand.org/lc3_adv_awk.php
.PHONY: help
help: ## Display this help.
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf " \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
##@ Development
.PHONY: manifests
manifests: controller-gen ## Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
$(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
.PHONY: generate
generate: controller-gen ## Generate code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.
$(CONTROLLER_GEN) object:headerFile="hack/boilerplate.go.txt" paths="./..."
.PHONY: fmt
fmt: ## Run go fmt against code.
go fmt ./...
.PHONY: vet
vet: ## Run go vet against code.
go vet ./...
.PHONY: test
test: manifests generate fmt vet envtest ## Run tests.
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) --bin-dir $(LOCALBIN) -p path)" go test $$(go list ./... | grep -v /e2e) -coverprofile cover.out
# Utilize Kind or modify the e2e tests to load the image locally, enabling compatibility with other vendors.
.PHONY: test-e2e # Run the e2e tests against a Kind k8s instance that is spun up.
test-e2e:
go test ./test/e2e/ -v -ginkgo.v
.PHONY: lint
lint: golangci-lint ## Run golangci-lint linter & yamllint
$(GOLANGCI_LINT) run
.PHONY: lint-fix
lint-fix: golangci-lint ## Run golangci-lint linter and perform fixes
$(GOLANGCI_LINT) run --fix
##@ Build
.PHONY: build
build: manifests generate fmt vet ## Build manager binary.
go build -o bin/manager cmd/main.go
.PHONY: run
run: manifests generate fmt vet ## Run a controller from your host.
go run ./cmd/main.go
# If you wish to build the manager image targeting other platforms you can use the --platform flag.
# (i.e. docker build --platform linux/arm64). However, you must enable docker buildKit for it.
# More info: https://docs.docker.com/develop/develop-images/build_enhancements/
.PHONY: docker-build
docker-build: ## Build docker image with the manager.
$(CONTAINER_TOOL) build -t ${IMG} .
.PHONY: docker-push
docker-push: ## Push docker image with the manager.
$(CONTAINER_TOOL) push ${IMG}
# PLATFORMS defines the target platforms for the manager image be built to provide support to multiple
# architectures. (i.e. make docker-buildx IMG=myregistry/mypoperator:0.0.1). To use this option you need to:
# - be able to use docker buildx. More info: https://docs.docker.com/build/buildx/
# - have enabled BuildKit. More info: https://docs.docker.com/develop/develop-images/build_enhancements/
# - be able to push the image to your registry (i.e. if you do not set a valid value via IMG=<myregistry/image:<tag>> then the export will fail)
# To adequately provide solutions that are compatible with multiple platforms, you should consider using this option.
PLATFORMS ?= linux/arm64,linux/amd64,linux/s390x,linux/ppc64le
.PHONY: docker-buildx
docker-buildx: ## Build and push docker image for the manager for cross-platform support
# copy existing Dockerfile and insert --platform=${BUILDPLATFORM} into Dockerfile.cross, and preserve the original Dockerfile
sed -e '1 s/\(^FROM\)/FROM --platform=\$$\{BUILDPLATFORM\}/; t' -e ' 1,// s//FROM --platform=\$$\{BUILDPLATFORM\}/' Dockerfile > Dockerfile.cross
- $(CONTAINER_TOOL) buildx create --name project-v3-builder
$(CONTAINER_TOOL) buildx use project-v3-builder
- $(CONTAINER_TOOL) buildx build --push --platform=$(PLATFORMS) --tag ${IMG} -f Dockerfile.cross .
- $(CONTAINER_TOOL) buildx rm project-v3-builder
rm Dockerfile.cross
.PHONY: build-installer
build-installer: manifests generate kustomize ## Generate a consolidated YAML with CRDs and deployment.
mkdir -p dist
echo "---" > dist/install.yaml # Clean previous content
@if [ -d "config/crd" ]; then \
$(KUSTOMIZE) build config/crd > dist/install.yaml; \
echo "---" >> dist/install.yaml; \
fi
cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
$(KUSTOMIZE) build config/default >> dist/install.yaml
##@ Deployment
ifndef ignore-not-found
ignore-not-found = false
endif
.PHONY: install
install: manifests kustomize ## Install CRDs into the K8s cluster specified in ~/.kube/config.
$(KUSTOMIZE) build config/crd | $(KUBECTL) apply -f -
.PHONY: uninstall
uninstall: manifests kustomize ## Uninstall CRDs from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.
$(KUSTOMIZE) build config/crd | $(KUBECTL) delete --ignore-not-found=$(ignore-not-found) -f -
.PHONY: deploy
deploy: manifests kustomize ## Deploy controller to the K8s cluster specified in ~/.kube/config.
cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
$(KUSTOMIZE) build config/default | $(KUBECTL) apply -f -
.PHONY: undeploy
undeploy: kustomize ## Undeploy controller from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.
$(KUSTOMIZE) build config/default | $(KUBECTL) delete --ignore-not-found=$(ignore-not-found) -f -
##@ Dependencies
## Location to install dependencies to
LOCALBIN ?= $(shell pwd)/bin
$(LOCALBIN):
mkdir -p $(LOCALBIN)
## Tool Binaries
KUBECTL ?= kubectl
KUSTOMIZE ?= $(LOCALBIN)/kustomize-$(KUSTOMIZE_VERSION)
CONTROLLER_GEN ?= $(LOCALBIN)/controller-gen-$(CONTROLLER_TOOLS_VERSION)
ENVTEST ?= $(LOCALBIN)/setup-envtest-$(ENVTEST_VERSION)
GOLANGCI_LINT = $(LOCALBIN)/golangci-lint-$(GOLANGCI_LINT_VERSION)
## Tool Versions
KUSTOMIZE_VERSION ?= v5.3.0
CONTROLLER_TOOLS_VERSION ?= v0.14.0
ENVTEST_VERSION ?= latest
GOLANGCI_LINT_VERSION ?= v1.54.2
.PHONY: kustomize
kustomize: $(KUSTOMIZE) ## Download kustomize locally if necessary.
$(KUSTOMIZE): $(LOCALBIN)
$(call go-install-tool,$(KUSTOMIZE),sigs.k8s.io/kustomize/kustomize/v5,$(KUSTOMIZE_VERSION))
.PHONY: controller-gen
controller-gen: $(CONTROLLER_GEN) ## Download controller-gen locally if necessary.
$(CONTROLLER_GEN): $(LOCALBIN)
$(call go-install-tool,$(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen,$(CONTROLLER_TOOLS_VERSION))
.PHONY: envtest
envtest: $(ENVTEST) ## Download setup-envtest locally if necessary.
$(ENVTEST): $(LOCALBIN)
$(call go-install-tool,$(ENVTEST),sigs.k8s.io/controller-runtime/tools/setup-envtest,$(ENVTEST_VERSION))
.PHONY: golangci-lint
golangci-lint: $(GOLANGCI_LINT) ## Download golangci-lint locally if necessary.
$(GOLANGCI_LINT): $(LOCALBIN)
$(call go-install-tool,$(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,${GOLANGCI_LINT_VERSION})
# go-install-tool will 'go install' any package with custom target and name of binary, if it doesn't exist
# $1 - target path with name of binary (ideally with version)
# $2 - package url which can be installed
# $3 - specific version of package
define go-install-tool
@[ -f $(1) ] || { \
set -e; \
package=$(2)@$(3) ;\
echo "Downloading $${package}" ;\
GOBIN=$(LOCALBIN) go install $${package} ;\
mv "$$(echo "$(1)" | sed "s/-$(3)$$//")" $(1) ;\
}
endef
PROJECT
:用于构建新组件的 Kubebuilder 元数据
# Code generated by tool. DO NOT EDIT.
# This file is used to track the info used to scaffold your project
# and allow the plugins properly work.
# More info: https://book.kubebuilder.io/reference/project-config.html
domain: tutorial.kubebuilder.io
layout:
- go.kubebuilder.io/v4
projectName: project
repo: tutorial.kubebuilder.io/project
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: tutorial.kubebuilder.io
group: batch
kind: CronJob
path: tutorial.kubebuilder.io/project/api/v1
version: v1
webhooks:
defaulting: true
validation: true
webhookVersion: v1
version: "3"
我们还在config/
目录下获得了启动配置。目前,它只包含了Kustomize YAML 定义,用于在集群上启动我们的控制器,但一旦我们开始编写控制器,它还将包含我们的自定义资源定义、RBAC 配置和 Webhook 配置。
config/default
包含了一个Kustomize 基础配置 ,用于在标准配置中启动控制器。
每个其他目录都包含一个不同的配置部分,重构为自己的基础配置:
最后,但肯定不是最不重要的,Kubebuilder 为我们的项目生成了基本的入口点:main.go
。让我们接着看看…
emptymain.go
版权所有 2022 年 Kubernetes 作者。
根据 Apache 许可,版本 2.0(“许可”)获得许可;
除非符合许可的规定,否则您不得使用此文件。
您可以在以下网址获取许可的副本:
http://www.apache.org/licenses/LICENSE-2.0
除非适用法律要求或书面同意,否则根据许可分发的软件将按“原样”分发,
不附带任何明示或暗示的担保或条件。
请参阅许可,了解特定语言下的权限和限制。
我们的包从一些基本的导入开始。特别是:
package main
import (
"flag"
"os"
// 导入所有 Kubernetes 客户端认证插件(例如 Azure、GCP、OIDC 等)
// 以确保 exec-entrypoint 和 run 可以利用它们。
_ "k8s.io/client-go/plugin/pkg/client/auth"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
"sigs.k8s.io/controller-runtime/pkg/metrics/server"
"sigs.k8s.io/controller-runtime/pkg/webhook"
// +kubebuilder:scaffold:imports
)
每组控制器都需要一个
Scheme ,
它提供了 Kinds 与它们对应的 Go 类型之间的映射。稍后在编写 API 定义时,我们会更详细地讨论 Kinds,现在先有个印象即可。
var (
scheme = runtime.NewScheme()
setupLog = ctrl.Log.WithName("setup")
)
func init() {
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
//+kubebuilder:scaffold:scheme
}
此时,我们的主函数相当简单:
我们为指标设置了一些基本的标志。
我们实例化了一个
manager ,
它负责运行我们所有的控制器,并设置了共享缓存和客户端到 API 服务器的连接(请注意我们告诉 manager 关于我们的 Scheme)。
我们运行我们的 manager,它反过来运行我们所有的控制器和 Webhooks。
manager 被设置为在接收到优雅关闭信号之前一直运行。
这样,当我们在 Kubernetes 上运行时,我们会在 Pod 优雅终止时表现良好。
虽然目前我们没有任何东西要运行,但记住+kubebuilder:scaffold:builder
注释的位置——很快那里会变得有趣起来。
func main() {
var metricsAddr string
var enableLeaderElection bool
var probeAddr string
flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "leader-elect", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.")
opts := zap.Options{
Development: true,
}
opts.BindFlags(flag.CommandLine)
flag.Parse()
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
Metrics: server.Options{
BindAddress: metricsAddr,
},
WebhookServer: webhook.NewServer(webhook.Options{Port: 9443}),
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: "80807133.tutorial.kubebuilder.io",
})
if err != nil {
setupLog.Error(err, "unable to start manager")
os.Exit(1)
}
注意,Manager
可以通过以下方式进行限制,使所有控制器只监视特定命名空间中的资源:
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
Cache: cache.Options{
DefaultNamespaces: map[string]cache.Config{
namespace: {},
},
},
Metrics: server.Options{
BindAddress: metricsAddr,
},
WebhookServer: webhook.NewServer(webhook.Options{Port: 9443}),
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: "80807133.tutorial.kubebuilder.io",
})
上面的示例会把项目的作用范围更改为单个Namespace。在这种情况下,建议将提供的授权也限制在此命名空间内,方法是将默认的ClusterRole和ClusterRoleBinding替换为Role和RoleBinding。
有关更多信息,请参阅 Kubernetes 关于使用 RBAC 授权 的文档。
此外,还可以使用 DefaultNamespaces
从 cache.Options{}
缓存特定一组命名空间中的对象:
var namespaces []string // 名称空间列表
defaultNamespaces := make(map[string]cache.Config)
for _, ns := range namespaces {
defaultNamespaces[ns] = cache.Config{}
}
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
Cache: cache.Options{
DefaultNamespaces: defaultNamespaces,
},
Metrics: server.Options{
BindAddress: metricsAddr,
},
WebhookServer: webhook.NewServer(webhook.Options{Port: 9443}),
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: "80807133.tutorial.kubebuilder.io",
})
有关更多信息,请参阅 cache.Options{}
// +kubebuilder:scaffold:builder
if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up health check")
os.Exit(1)
}
if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up ready check")
os.Exit(1)
}
setupLog.Info("starting manager")
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
setupLog.Error(err, "problem running manager")
os.Exit(1)
}
}
有了这个,我们可以开始构建我们的 API 了!
实际上,在开始创建我们的 API 之前,我们应该稍微谈一下术语。
当我们在 Kubernetes 中讨论 API 时,我们经常使用 4 个术语:groups (组)、versions (版本)、kinds (类型)和resources (资源)。
在 Kubernetes 中,API Group (API 组)简单地是相关功能的集合。每个组都有一个或多个versions (版本),正如其名称所示,允许我们随着时间的推移改变 API 的工作方式。
每个 API 组-版本包含一个或多个 API 类型,我们称之为kinds (类型)。虽然一个类型在不同版本之间可能会改变形式,但每种形式都必须能够以某种方式存储其他形式的所有数据(我们可以将数据存储在字段中,或者在注释中)。这意味着使用较旧的 API 版本不会导致较新的数据丢失或损坏。您有时还会听到resources (资源)的说法:资源只是 API 中对某个类型的一种使用方式。通常,类型和资源之间是一一对应的。例如,pods 资源对应于Pod 类型。然而,有时同一类型可能由多个资源返回。例如,Scale 类型由所有规模子资源返回,比如deployments/scale 或replicasets/scale 。这就是允许 Kubernetes HorizontalPodAutoscaler 与不同资源交互的原因。然而,对于自定义资源定义(CRD),每种类型将对应于单个资源。
请注意,资源始终以小写形式存在,并且按照惯例是类型的小写形式。
当我们提到特定组-版本中的一种类型时,我们将其称为GroupVersionKind (GVK)。资源也是如此。正如我们将很快看到的那样,每个 GVK 对应于包中的给定根 Go 类型。
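下面用一小段独立的 Go 代码(仅作示意)直观展示 GVK 以及与之对应的资源(GroupVersionResource,GVR)的写法,以本教程稍后会用到的 CronJob 为例:
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// batch.tutorial.kubebuilder.io/v1 组-版本中的 CronJob 类型(GVK)
	gvk := schema.GroupVersionKind{
		Group:   "batch.tutorial.kubebuilder.io",
		Version: "v1",
		Kind:    "CronJob",
	}
	// 对应的资源(GVR):按照惯例是类型名的小写复数形式
	gvr := schema.GroupVersionResource{
		Group:    gvk.Group,
		Version:  gvk.Version,
		Resource: "cronjobs",
	}
	fmt.Println(gvk.String()) // batch.tutorial.kubebuilder.io/v1, Kind=CronJob
	fmt.Println(gvr.String())
}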
现在我们术语明晰了,我们可以实际地 创建我们的 API!
在接下来的添加新 API 部分中,我们将检查工具如何帮助我们使用命令kubebuilder create api
创建我们自己的 API。
这个命令的目标是为我们的类型创建自定义资源(CR)和自定义资源定义(CRD)。要进一步了解,请参阅使用自定义资源定义扩展 Kubernetes API 。
新的 API 是我们向 Kubernetes 介绍自定义对象的方式。Go 结构体用于生成 CRD,其中包括我们数据的模式(schema),以及诸如新类型叫什么名字之类的跟踪信息。然后,我们就可以创建自定义对象的实例,这些实例将由我们的controllers 管理。
我们的 API 和资源代表着我们在集群中的解决方案。基本上,CRD 是我们定制对象的定义,而 CR 是它的一个实例。
让我们想象一个经典的场景:目标是在 Kubernetes 平台上运行应用程序及其数据库。那么,一个 CRD 可以代表应用程序,另一个可以代表数据库。用一个 CRD 描述应用程序、另一个描述数据库,可以避免违背封装、单一职责原则和内聚等概念。违背这些概念可能会导致意想不到的副作用,例如难以扩展、重用或维护,仅举几例。
这样,我们可以创建应用程序 CRD,它将拥有自己的控制器,并负责创建包含应用程序的部署以及创建访问它的服务等工作。类似地,我们可以创建一个代表数据库的 CRD,并部署一个负责管理数据库实例的控制器。
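为了说明这种职责划分,下面是一个极简的示意(其中的类型名与字段均为假设,并非任何脚手架生成的代码):应用和数据库各自拥有独立的 Spec,由各自的控制器管理。
package v1alpha1

// AppSpec 只描述应用本身的期望状态(假设示例)
type AppSpec struct {
	// 应用的容器镜像
	Image string `json:"image"`
	// 期望的副本数量
	Replicas int32 `json:"replicas"`
}

// DatabaseSpec 只描述数据库实例的期望状态(假设示例)
type DatabaseSpec struct {
	// 数据库版本
	Version string `json:"version"`
	// 持久化存储大小,例如 "10Gi"
	StorageSize string `json:"storageSize"`
}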
我们之前看到的Scheme
只是一种跟踪给定 GVK 对应的 Go 类型的方式(不要被其godocs 所压倒)。
例如,假设我们标记"tutorial.kubebuilder.io/api/v1".CronJob{}
类型属于batch.tutorial.kubebuilder.io/v1
API 组(隐含地表示它具有类型CronJob
)。
然后,我们可以根据来自 API 服务器的一些 JSON 构造一个新的&CronJob{}
,其中说
{
"kind": "CronJob",
"apiVersion": "batch.tutorial.kubebuilder.io/v1",
...
}
或者在我们提交一个&CronJob{}
进行更新时,正确查找组-版本。
要创建一个新的 Kind(你有关注上一章 的内容吗?)以及相应的控制器,我们可以使用 kubebuilder create api
命令:
kubebuilder create api --group batch --version v1 --kind CronJob
按下 y
键来选择 “Create Resource” 和 “Create Controller”。
第一次针对每个 group-version 调用此命令时,它将为新的 group-version 创建一个目录。
与你的 Go API 类型一起创建的默认 CustomResourceDefinition 清单使用 API 版本 v1
。如果你的项目打算支持早于 v1.16 的 Kubernetes 集群版本,你必须设置 --crd-version v1beta1
,并从 CRD_OPTIONS
Makefile 变量中移除 preserveUnknownFields=false
。详情请参阅CustomResourceDefinition 生成参考 。
在这种情况下,将创建一个名为 api/v1/
的目录,对应于 batch.tutorial.kubebuilder.io/v1
(还记得我们从一开始设置的 --domain
吗?)。
它还将添加一个用于我们的 CronJob
Kind 的文件,即 api/v1/cronjob_types.go
。每次使用不同的 Kind 调用该命令时,它都会添加一个相应的新文件。
让我们看看我们得到了什么,然后我们可以开始填写它。
emptyapi.go
版权所有 2022。
根据 Apache 许可证 2.0 版(“许可证”)获得许可;
除非符合许可证的规定,否则您不得使用此文件。
您可以在下面的网址获取许可证的副本
http://www.apache.org/licenses/LICENSE-2.0
除非适用法律要求或书面同意,否则根据许可证分发的软件
以“原样”为基础分发,没有任何明示或暗示的保证或条件。
请参阅特定语言管理权限和限制的许可证。
我们从简单的开始:我们导入 meta/v1
API 组,它通常不是单独公开的,而是包含所有 Kubernetes Kind 的公共元数据。
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
接下来,我们为我们的 Kind 的 Spec 和 Status 定义类型。Kubernetes 通过协调期望的状态(Spec
)与实际的集群状态(其他对象的Status
)和外部状态,然后记录它观察到的内容(Status
)来运行。因此,几乎每个具备功能的对象都包括 spec 和 status。一些类型,比如 ConfigMap
不遵循这种模式,因为它们不编码期望的状态,但大多数类型都是这样。
// 编辑此文件!这是你拥有的脚手架!
// 注意:json 标记是必需的。您添加的任何新字段必须具有 json 标记,以便对字段进行序列化。
// CronJobSpec 定义了 CronJob 的期望状态
type CronJobSpec struct {
// 插入其他的 Spec 字段 - 集群的期望状态
// 重要提示:在修改此文件后运行 "make" 以重新生成代码
}
// CronJobStatus 定义了 CronJob 的观察状态
type CronJobStatus struct {
// 插入其他的状态字段 - 定义集群的观察状态
// 重要提示:在修改此文件后运行 "make" 以重新生成代码
}
接下来,我们定义与实际 Kinds 对应的类型,CronJob
和 CronJobList
。
CronJob
是我们的根类型,描述了 CronJob
类型。与所有 Kubernetes 对象一样,它包含 TypeMeta
(描述 API 版本和 Kind),
还包含 ObjectMeta
,其中包含名称、命名空间和标签等信息。
CronJobList
简单地是多个 CronJob
的容器。它是用于批量操作(如 LIST)的 Kind。
一般情况下,我们从不修改它们中的任何一个 – 所有的修改都在 Spec 或 Status 中进行。
这个小小的 +kubebuilder:object:root
注释称为标记。我们稍后会看到更多这样的标记,但要知道它们作为额外的元数据,告诉 controller-tools (我们的代码和 YAML 生成器)额外的信息。
这个特定的标记告诉 object
生成器,这个类型表示一个 Kind。然后,object
生成器为我们生成了 runtime.Object 接口的实现,这是所有表示 Kinds 的类型必须实现的标准接口。
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// CronJob 是 cronjobs API 的架构
type CronJob struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec CronJobSpec `json:"spec,omitempty"`
Status CronJobStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// CronJobList 包含 CronJob 的列表
type CronJobList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []CronJob `json:"items"`
}
最后,我们将 Go 类型添加到 API 组中。这使我们可以将此 API 组中的类型添加到任何 Scheme 中。
func init() {
SchemeBuilder.Register(&CronJob{}, &CronJobList{})
}
现在我们已经了解了基本结构,让我们继续填写它!
在 Kubernetes 中,我们有一些关于如何设计 API 的规则。特别是,所有序列化字段必须是 camelCase,因此我们使用 JSON 结构标记来指定这一点。我们还可以使用 omitempty 结构标记来表示:当字段为空时,序列化时应将其省略。
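下面用一个独立的小示例(字段名为假设)演示 camelCase 的 json 标记以及 omitempty 的效果:
package main

import (
	"encoding/json"
	"fmt"
)

// ExampleSpec 仅作示意:序列化字段使用 camelCase 的 json 标记
type ExampleSpec struct {
	DesiredReplicas int32  `json:"desiredReplicas"`
	OptionalNote    string `json:"optionalNote,omitempty"`
}

func main() {
	out, _ := json.Marshal(ExampleSpec{DesiredReplicas: 3})
	// optionalNote 为空,因 omitempty 而被省略
	fmt.Println(string(out)) // {"desiredReplicas":3}
}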
字段可以使用大多数基本类型。数字是个例外:出于 API 兼容性的目的,我们接受三种形式的数字:int32
和 int64
用于整数,resource.Quantity
用于小数。
等等,什么是 Quantity?
Quantity 是一种特殊的表示小数的记法,它具有明确定义的固定表示,使其在不同机器上更易于移植。在 Kubernetes 中,当指定 pod 的资源请求和限制时,您可能已经注意到了它们。
它们在概念上类似于浮点数:它们有一个有效数字、基数和指数。它们的可序列化和人类可读格式使用整数和后缀来指定值,就像我们描述计算机存储的方式一样。
例如,值2m
表示十进制记法中的0.002
。2Ki
表示十进制中的2048
,而2K
表示十进制中的2000
。如果我们想指定分数,我们可以切换到一个后缀,让我们使用整数:2.5
是2500m
。
有两种支持的基数:10 和 2(分别称为十进制和二进制)。十进制基数用“正常”的 SI 后缀表示(例如M
和K
),而二进制基数则用“mebi”记法表示(例如Mi
和Ki
)。可以参考兆字节和二进制兆字节 。
我们还使用另一种特殊类型:metav1.Time
。它的功能与time.Time
完全相同,只是它具有固定的、可移植的序列化格式。
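下面是一个简单的示意:metav1.Time 在 JSON 序列化时总是使用 RFC3339 格式。
package main

import (
	"encoding/json"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	t := metav1.NewTime(time.Date(2024, 1, 2, 3, 4, 5, 0, time.UTC))
	out, _ := json.Marshal(t)
	fmt.Println(string(out)) // "2024-01-02T03:04:05Z"
}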
现在,让我们来看看我们的 CronJob 对象是什么样子的!
project/api/v1/cronjob_types.go
版权所有 2024 年 Kubernetes 作者。
根据 Apache 许可证 2.0 版(以下简称“许可证”)获得许可;
除非符合许可证的规定,否则您不得使用此文件。
您可以在以下网址获取许可证的副本
http://www.apache.org/licenses/LICENSE-2.0
除非适用法律要求或书面同意,根据许可证分发的软件是基于“按原样”分发的,
没有任何明示或暗示的担保或条件。请参阅许可证以获取有关特定语言管理权限和限制的详细信息。
package v1
import (
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// 注意:json 标记是必需的。您添加的任何新字段都必须具有字段的 json 标记以进行序列化。
首先,让我们看一下我们的规范。正如我们之前讨论过的,规范保存期望状态 ,因此我们控制器的任何“输入”都在这里。
从根本上讲,CronJob 需要以下几个部分:
一个计划(CronJob 中的 cron )
一个要运行的作业的模板(CronJob 中的 job )
我们还希望有一些额外的内容,这些将使我们的用户生活更轻松:
启动作业的可选截止时间(如果错过此截止时间,我们将等到下一个预定的时间)
如果多个作业同时运行,应该怎么办(我们等待吗?停止旧的作业?两者都运行?)
暂停运行 CronJob 的方法,以防出现问题
对旧作业历史记录的限制
请记住,由于我们从不读取自己的状态,我们需要有其他方法来跟踪作业是否已运行。我们可以使用至少一个旧作业来做到这一点。
我们将使用几个标记(// +comment
)来指定额外的元数据。这些将在生成我们的 CRD 清单时由 controller-tools 使用。
正如我们将在稍后看到的,controller-tools 还将使用 GoDoc 来形成字段的描述。
// CronJobSpec 定义了 CronJob 的期望状态
type CronJobSpec struct {
//+kubebuilder:validation:MinLength=0
// Cron 格式的计划,请参阅 https://en.wikipedia.org/wiki/Cron。
Schedule string `json:"schedule"`
//+kubebuilder:validation:Minimum=0
// 如果由于任何原因错过预定的时间,则作业启动的可选截止时间(以秒为单位)。错过的作业执行将被视为失败的作业。
// +optional
StartingDeadlineSeconds *int64 `json:"startingDeadlineSeconds,omitempty"`
// 指定如何处理作业的并发执行。
// 有效值包括:
// - "Allow"(默认):允许 CronJob 并发运行;
// - "Forbid":禁止并发运行,如果上一次运行尚未完成,则跳过下一次运行;
// - "Replace":取消当前正在运行的作业,并用新作业替换它
// +optional
ConcurrencyPolicy ConcurrencyPolicy `json:"concurrencyPolicy,omitempty"`
// 此标志告诉控制器暂停后续执行,它不适用于已经启动的执行。默认为 false。
// +optional
Suspend *bool `json:"suspend,omitempty"`
// 指定执行 CronJob 时将创建的作业。
JobTemplate batchv1.JobTemplateSpec `json:"jobTemplate"`
//+kubebuilder:validation:Minimum=0
// 要保留的成功完成作业的数量。
// 这是一个指针,用于区分明确的零和未指定的情况。
// +optional
SuccessfulJobsHistoryLimit *int32 `json:"successfulJobsHistoryLimit,omitempty"`
//+kubebuilder:validation:Minimum=0
// 要保留的失败完成作业的数量。
// 这是一个指针,用于区分明确的零和未指定的情况。
// +optional
FailedJobsHistoryLimit *int32 `json:"failedJobsHistoryLimit,omitempty"`
}
我们定义了一个自定义类型来保存我们的并发策略。实际上,它在内部只是一个字符串,但该类型提供了额外的文档,并允许我们在类型而不是字段上附加验证,使验证更容易重用。
// ConcurrencyPolicy 描述作业将如何处理。
// 只能指定以下并发策略中的一种。
// 如果未指定任何策略,则默认为 AllowConcurrent。
// +kubebuilder:validation:Enum=Allow;Forbid;Replace
type ConcurrencyPolicy string
const (
// AllowConcurrent 允许 CronJob 并发运行。
AllowConcurrent ConcurrencyPolicy = "Allow"
// ForbidConcurrent 禁止并发运行,如果上一个作业尚未完成,则跳过下一个运行。
ForbidConcurrent ConcurrencyPolicy = "Forbid"
// ReplaceConcurrent 取消当前正在运行的作业,并用新作业替换它。
ReplaceConcurrent ConcurrencyPolicy = "Replace"
)
接下来,让我们设计我们的状态,其中包含观察到的状态。它包含我们希望用户或其他控制器能够轻松获取的任何信息。
我们将保留一个正在运行的作业列表,以及我们成功运行作业的上次时间。请注意,我们使用 metav1.Time
而不是 time.Time
来获得稳定的序列化,如上文所述。
// CronJobStatus 定义了 CronJob 的观察状态
type CronJobStatus struct {
// 插入额外的状态字段 - 定义集群的观察状态
// 重要提示:在修改此文件后,请运行“make”以重新生成代码
// 指向当前正在运行的作业的指针列表。
// +optional
Active []corev1.ObjectReference `json:"active,omitempty"`
// 作业最后成功调度的时间。
// +optional
LastScheduleTime *metav1.Time `json:"lastScheduleTime,omitempty"`
}
最后,我们有我们已经讨论过的其余样板。如前所述,除了标记我们想要一个状态子资源,以便表现得像内置的 Kubernetes 类型一样,我们不需要更改这个。
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// CronJob 是 cronjobs API 的模式
type CronJob struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec CronJobSpec `json:"spec,omitempty"`
Status CronJobStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// CronJobList 包含 CronJob 的列表
type CronJobList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []CronJob `json:"items"`
}
func init() {
SchemeBuilder.Register(&CronJob{}, &CronJobList{})
}
既然我们有了一个 API,我们需要编写一个控制器来实际实现功能。
如果你浏览了 api/v1/
目录中的其他文件,你可能会注意到除了 cronjob_types.go
外还有两个额外的文件:groupversion_info.go
和 zz_generated.deepcopy.go
。
这两个文件都不需要进行编辑(前者保持不变,后者是自动生成的),但了解它们的内容是很有用的。
groupversion_info.go
包含有关组版本的常见元数据:
project/api/v1/groupversion_info.go
版权所有 2024 年 Kubernetes 作者。
根据 Apache 许可,版本 2.0 进行许可(“许可”);
除非遵守许可,否则您不得使用此文件。
您可以在以下网址获取许可的副本:
http://www.apache.org/licenses/LICENSE-2.0
除非适用法律要求或书面同意,根据许可分发的软件是基于“按原样”的基础分发的,
不附带任何明示或暗示的担保或条件。
请参阅许可以获取特定语言下的权限和限制。
首先,我们有一些 包级别 的标记,表示此包中有 Kubernetes 对象,并且此包表示组 batch.tutorial.kubebuilder.io
。
object
生成器利用前者,而 CRD 生成器则利用后者从此包中生成正确的 CRD 元数据。
// Package v1 包含了 batch v1 API 组的 API Schema 定义
// +kubebuilder:object:generate=true
// +groupName=batch.tutorial.kubebuilder.io
package v1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
)
然后,我们有一些通常有用的变量,帮助我们设置 Scheme。
由于我们需要在我们的控制器中使用此包中的所有类型,有一个方便的方法将所有类型添加到某个 Scheme
中是很有帮助的(也是惯例)。SchemeBuilder 为我们简化了这一过程。
var (
// GroupVersion 是用于注册这些对象的组版本
GroupVersion = schema.GroupVersion{Group: "batch.tutorial.kubebuilder.io", Version: "v1"}
// SchemeBuilder 用于将 go 类型添加到 GroupVersionKind scheme
SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}
// AddToScheme 将此组版本中的类型添加到给定的 scheme 中。
AddToScheme = SchemeBuilder.AddToScheme
)
zz_generated.deepcopy.go
包含了上述 runtime.Object
接口的自动生成实现,该接口标记了所有我们的根类型表示的 Kinds。
runtime.Object
接口的核心是一个深度复制方法 DeepCopyObject
。
controller-tools 中的 object
生成器还为每个根类型及其所有子类型生成了另外两个方便的方法:DeepCopy
和 DeepCopyInto
。
控制器是 Kubernetes 和任何操作者的核心。
控制器的工作是确保对于任何给定的对象,世界的实际状态(集群状态,以及可能是 Kubelet 的运行容器或云提供商的负载均衡器等外部状态)与对象中的期望状态相匹配。每个控制器专注于一个根 Kind,但可能会与其他 Kinds 交互。
我们称这个过程为调和 。
在 controller-runtime 中,实现特定 Kind 的调和逻辑称为Reconciler 。调和器接受一个对象的名称,并返回我们是否需要再次尝试(例如在出现错误或周期性控制器(如 HorizontalPodAutoscaler)的情况下)。
emptycontroller.go
版权所有 2022。
根据 Apache 许可,版本 2.0 进行许可(“许可”);
除非遵守许可,否则您不得使用此文件。
您可以在以下网址获取许可的副本:
http://www.apache.org/licenses/LICENSE-2.0
除非适用法律要求或书面同意,根据许可分发的软件是基于“按原样”的基础分发的,
不附带任何明示或暗示的担保或条件。
请参阅许可以获取特定语言下的权限和限制。
首先,我们从一些标准的导入开始。
与之前一样,我们需要核心的 controller-runtime 库,以及 client 包和我们的 API 类型包。
package controllers
import (
"context"
"k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
batchv1 "tutorial.kubebuilder.io/project/api/v1"
)
接下来,kubebuilder 为我们生成了一个基本的 reconciler 结构。
几乎每个 reconciler 都需要记录日志,并且需要能够获取对象,因此这些都是开箱即用的。
// CronJobReconciler reconciles a CronJob object
type CronJobReconciler struct {
client.Client
Scheme *runtime.Scheme
}
大多数控制器最终都会在集群上运行,因此它们需要 RBAC 权限,我们使用 controller-tools 的 RBAC markers 来指定这些权限。这些是运行所需的最低权限。
随着我们添加更多功能,我们将需要重新审视这些权限。
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/status,verbs=get;update;patch
ClusterRole
manifest 位于 config/rbac/role.yaml
,通过以下命令使用 controller-gen 从上述标记生成:
// make manifests
注意:如果收到错误,请运行错误中指定的命令,然后重新运行 make manifests
。
Reconcile
实际上执行单个命名对象的对账。
我们的 Request 只有一个名称,但我们可以使用 client 从缓存中获取该对象。
我们返回一个空结果和没有错误,这表示 controller-runtime 我们已成功对账了此对象,并且在有变更之前不需要再次尝试。
大多数控制器需要一个记录句柄和一个上下文,因此我们在这里设置它们。
context 用于允许取消请求,以及可能的跟踪等功能。它是所有 client 方法的第一个参数。Background
上下文只是一个基本上没有任何额外数据或时间限制的上下文。
记录句柄让我们记录日志。controller-runtime 通过一个名为 logr 的库使用结构化日志。很快我们会看到,日志记录通过将键值对附加到静态消息上来实现。我们可以在我们的 reconciler 的顶部预先分配一些键值对,以便将它们附加到此 reconciler 中的所有日志行。
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
_ = log.FromContext(ctx)
// your logic here
return ctrl.Result{}, nil
}
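下面是一个独立的小示例(仅作示意),演示上文所说的“固定消息加键值对”的结构化日志写法,以及用 WithValues 预先附加键值对:
package main

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	ctrl.SetLogger(zap.New(zap.UseDevMode(true)))

	// 取得日志句柄,并预先附加一些键值对
	logger := log.FromContext(context.Background()).WithValues("cronjob", "default/sample")

	// 固定消息 + 键值对,而不是格式化字符串
	logger.Info("starting reconcile", "activeJobs", 2)
	logger.V(1).Info("schedule detail", "nextRun", "2024-01-01T00:00:00Z")
}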
最后,我们将此 reconciler 添加到 manager 中,以便在启动 manager 时启动它。
目前,我们只指出此 reconciler 作用于 CronJob
。稍后,我们将使用这个来标记我们关心相关的对象。
func (r *CronJobReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&batchv1.CronJob{}).
Complete(r)
}
现在我们已经看到了调和器的基本结构,让我们填写 CronJob
的逻辑。
我们的CronJob控制器的基本逻辑如下:
加载指定的CronJob
列出所有活动的作业,并更新状态
根据历史限制清理旧作业
检查我们是否被暂停(如果是,则不执行其他操作)
获取下一个预定运行时间
如果符合预定时间、未超过截止时间,并且不受并发策略阻塞,则运行一个新作业
当我们看到一个正在运行的作业(自动完成)或者到了下一个预定运行时间时,重新排队。
project/internal/controller/cronjob_controller.go
版权所有 2024 Kubernetes 作者。
根据 Apache 许可证 2.0 版(“许可证”)获得许可;
除非符合许可证的规定,否则您不得使用此文件。
您可以在以下网址获取许可证的副本
http://www.apache.org/licenses/LICENSE-2.0
除非适用法律要求或书面同意,否则根据许可证分发的软件
将按“原样”分发,不附带任何明示或暗示的担保或条件。
请参阅许可证以了解特定语言下的权限和限制。
我们将从一些导入开始。您将看到我们需要比为我们自动生成的导入更多的导入。
我们将在使用每个导入时讨论它们。
package controller
import (
"context"
"fmt"
"sort"
"time"
"github.com/robfig/cron"
kbatch "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
ref "k8s.io/client-go/tools/reference"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
batchv1 "tutorial.kubebuilder.io/project/api/v1"
)
接下来,我们需要一个时钟,它将允许我们在测试中模拟时间。
// CronJobReconciler 调和 CronJob 对象
type CronJobReconciler struct {
client.Client
Scheme *runtime.Scheme
Clock
}
我们将模拟时钟,以便在测试中更容易地跳转时间;“真实”时钟只是调用 time.Now。
type realClock struct{}
func (_ realClock) Now() time.Time { return time.Now() }
// 时钟知道如何获取当前时间。
// 它可以用于测试中模拟时间。
type Clock interface {
Now() time.Time
}
请注意,我们需要更多的 RBAC 权限 —— 因为我们现在正在创建和管理作业,所以我们需要为这些操作添加权限,
这意味着需要添加一些 标记 。
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/finalizers,verbs=update
//+kubebuilder:rbac:groups=batch,resources=jobs,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=batch,resources=jobs/status,verbs=get
现在,我们进入控制器的核心——调和逻辑。
var (
scheduledTimeAnnotation = "batch.tutorial.kubebuilder.io/scheduled-at"
)
// Reconcile 是主要的 Kubernetes 调和循环的一部分,旨在将集群的当前状态移动到期望的状态。
// TODO(用户):修改 Reconcile 函数以比较 CronJob 对象指定的状态与实际集群状态,然后执行操作以使集群状态反映用户指定的状态。
//
// 有关更多详细信息,请查看此处的 Reconcile 和其结果:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.17.0/pkg/reconcile
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := log.FromContext(ctx)
我们将使用我们的客户端获取 CronJob。所有客户端方法都以上下文(以允许取消)作为它们的第一个参数,
并以对象本身作为它们的最后一个参数。Get 有点特殊,因为它以一个 NamespacedName
作为中间参数(大多数没有中间参数,正如我们将在下面看到的)。
许多客户端方法还在最后接受可变选项。
var cronJob batchv1.CronJob
if err := r.Get(ctx, req.NamespacedName, &cronJob); err != nil {
log.Error(err, "无法获取 CronJob")
// 我们将忽略未找到的错误,因为它们不能通过立即重新排队来修复(我们需要等待新的通知),并且我们可以在删除的请求中得到它们。
return ctrl.Result{}, client.IgnoreNotFound(err)
}
为了完全更新我们的状态,我们需要列出此命名空间中属于此 CronJob 的所有子作业。
类似于 Get,我们可以使用 List 方法列出子作业。请注意,我们使用可变选项设置命名空间和字段匹配(实际上是我们在下面设置的索引查找)。
var childJobs kbatch.JobList
if err := r.List(ctx, &childJobs, client.InNamespace(req.Namespace), client.MatchingFields{jobOwnerKey: req.Name}); err != nil {
log.Error(err, "无法列出子作业")
return ctrl.Result{}, err
}
调和程序会获取由 cronjob 拥有的所有作业,以便更新状态。随着 cronjob 数量的增加,
查找这些作业可能会变得非常慢,因为我们必须对所有作业进行筛选。为了更高效地查找,
这些作业将按其控制器的名称在本地建立索引:我们会在缓存的 Job 对象上添加一个 jobOwnerKey 字段,
该键引用拥有它的控制器,并充当索引。在本文档的后面,我们将配置管理器来实际索引此字段。
一旦我们拥有所有我们拥有的作业,我们将它们分为活动、成功和失败的作业,并跟踪最近的运行时间,以便我们可以在状态中记录它。
请记住,状态应该能够从世界的状态中重建,因此通常不建议从根对象的状态中读取。相反,您应该在每次运行时重新构建它。这就是我们将在这里做的事情。
我们可以使用状态条件来检查作业是否“完成”,以及它是成功还是失败。我们将把这个逻辑放在一个辅助函数中,使我们的代码更清晰。
// 找到活动作业列表
var activeJobs []*kbatch.Job
var successfulJobs []*kbatch.Job
var failedJobs []*kbatch.Job
var mostRecentTime *time.Time // 找到最近的运行时间,以便我们可以在状态中记录它
如果作业具有标记为 true 的 “Complete” 或 “Failed” 条件,我们就认为该作业已“完成”。
状态条件允许我们向对象添加可扩展的状态信息,其他人类和控制器可以检查这些信息以检查完成和健康等情况。
isJobFinished := func(job *kbatch.Job) (bool, kbatch.JobConditionType) {
for _, c := range job.Status.Conditions {
if (c.Type == kbatch.JobComplete || c.Type == kbatch.JobFailed) && c.Status == corev1.ConditionTrue {
return true, c.Type
}
}
return false, ""
}
我们将使用一个辅助函数从我们在作业创建时添加的注释中提取预定时间。
getScheduledTimeForJob := func(job *kbatch.Job) (*time.Time, error) {
timeRaw := job.Annotations[scheduledTimeAnnotation]
if len(timeRaw) == 0 {
return nil, nil
}
timeParsed, err := time.Parse(time.RFC3339, timeRaw)
if err != nil {
return nil, err
}
return &timeParsed, nil
}
for i, job := range childJobs.Items {
_, finishedType := isJobFinished(&job)
switch finishedType {
case "": // 进行中
activeJobs = append(activeJobs, &childJobs.Items[i])
case kbatch.JobFailed:
failedJobs = append(failedJobs, &childJobs.Items[i])
case kbatch.JobComplete:
successfulJobs = append(successfulJobs, &childJobs.Items[i])
}
// 我们将在注释中存储启动时间,因此我们将从活动作业中重新构建它。
scheduledTimeForJob, err := getScheduledTimeForJob(&job)
if err != nil {
log.Error(err, "无法解析子作业的计划时间", "job", &job)
continue
}
if scheduledTimeForJob != nil {
if mostRecentTime == nil || mostRecentTime.Before(*scheduledTimeForJob) {
mostRecentTime = scheduledTimeForJob
}
}
}
if mostRecentTime != nil {
cronJob.Status.LastScheduleTime = &metav1.Time{Time: *mostRecentTime}
} else {
cronJob.Status.LastScheduleTime = nil
}
cronJob.Status.Active = nil
for _, activeJob := range activeJobs {
jobRef, err := ref.GetReference(r.Scheme, activeJob)
if err != nil {
log.Error(err, "无法引用活动作业", "job", activeJob)
continue
}
cronJob.Status.Active = append(cronJob.Status.Active, *jobRef)
}
在这里,我们将记录我们观察到的作业数量,以便进行调试。请注意,我们不使用格式字符串,而是使用固定消息,并附加附加信息的键值对。这样可以更容易地过滤和查询日志行。
log.V(1).Info("作业数量", "活动作业", len(activeJobs), "成功的作业", len(successfulJobs), "失败的作业", len(failedJobs))
使用我们收集的数据,我们将更新我们的 CRD 的状态。
就像之前一样,我们使用我们的客户端。为了专门更新状态子资源,我们将使用客户端的 Status
部分,以及 Update
方法。
状态子资源会忽略对 spec 的更改,因此不太可能与任何其他更新冲突,并且可以具有单独的权限。
if err := r.Status().Update(ctx, &cronJob); err != nil {
log.Error(err, "无法更新 CronJob 状态")
return ctrl.Result{}, err
}
一旦我们更新了我们的状态,我们可以继续确保世界的状态与我们在规范中想要的状态匹配。
首先,我们将尝试清理旧作业,以免留下太多作业。
// 注意:删除这些是"尽力而为"的——如果我们在特定的作业上失败,我们不会重新排队只是为了完成删除。
if cronJob.Spec.FailedJobsHistoryLimit != nil {
sort.Slice(failedJobs, func(i, j int) bool {
if failedJobs[i].Status.StartTime == nil {
return failedJobs[j].Status.StartTime != nil
}
return failedJobs[i].Status.StartTime.Before(failedJobs[j].Status.StartTime)
})
for i, job := range failedJobs {
if int32(i) >= int32(len(failedJobs))-*cronJob.Spec.FailedJobsHistoryLimit {
break
}
if err := r.Delete(ctx, job, client.PropagationPolicy(metav1.DeletePropagationBackground)); client.IgnoreNotFound(err) != nil {
log.Error(err, "无法删除旧的失败作业", "job", job)
} else {
log.V(0).Info("已删除旧的失败作业", "job", job)
}
}
}
if cronJob.Spec.SuccessfulJobsHistoryLimit != nil {
sort.Slice(successfulJobs, func(i, j int) bool {
if successfulJobs[i].Status.StartTime == nil {
return successfulJobs[j].Status.StartTime != nil
}
return successfulJobs[i].Status.StartTime.Before(successfulJobs[j].Status.StartTime)
})
for i, job := range successfulJobs {
if int32(i) >= int32(len(successfulJobs))-*cronJob.Spec.SuccessfulJobsHistoryLimit {
break
}
if err := r.Delete(ctx, job, client.PropagationPolicy(metav1.DeletePropagationBackground)); err != nil {
log.Error(err, "无法删除旧的成功作业", "job", job)
} else {
log.V(0).Info("已删除旧的成功作业", "job", job)
}
}
}
如果此对象被暂停,我们不希望运行任何作业,所以我们将立即停止。
如果我们正在运行的作业出现问题,我们希望暂停运行以进行调查或对集群进行操作,而不删除对象,这是很有用的。
if cronJob.Spec.Suspend != nil && *cronJob.Spec.Suspend {
log.V(1).Info("CronJob 已暂停,跳过")
return ctrl.Result{}, nil
}
如果我们没有暂停,我们将需要计算下一个预定运行时间,以及我们是否有一个尚未处理的运行。
我们将使用我们有用的 cron 库来计算下一个预定时间。
我们将从我们的最后一次运行时间开始计算适当的时间,或者如果我们找不到最后一次运行,则从 CronJob 的创建开始计算。
如果错过了太多的运行并且我们没有设置任何截止时间,那么我们将中止,以免在控制器重新启动或发生故障时引起问题。
否则,我们将返回错过的运行(我们将只使用最新的),以及下一个运行,以便我们知道何时再次进行调和。
getNextSchedule := func(cronJob *batchv1.CronJob, now time.Time) (lastMissed time.Time, next time.Time, err error) {
sched, err := cron.ParseStandard(cronJob.Spec.Schedule)
if err != nil {
return time.Time{}, time.Time{}, fmt.Errorf("不可解析的调度 %q:%v", cronJob.Spec.Schedule, err)
}
// 为了优化起见,稍微作弊一下,从我们最后观察到的运行时间开始
// 我们可以在这里重建这个,但是没有什么意义,因为我们刚刚更新了它。
var earliestTime time.Time
if cronJob.Status.LastScheduleTime != nil {
earliestTime = cronJob.Status.LastScheduleTime.Time
} else {
earliestTime = cronJob.ObjectMeta.CreationTimestamp.Time
}
if cronJob.Spec.StartingDeadlineSeconds != nil {
// 控制器将不会在此点以下调度任何内容
schedulingDeadline := now.Add(-time.Second * time.Duration(*cronJob.Spec.StartingDeadlineSeconds))
if schedulingDeadline.After(earliestTime) {
earliestTime = schedulingDeadline
}
}
if earliestTime.After(now) {
return time.Time{}, sched.Next(now), nil
}
starts := 0
// 我们将从最后一次运行时间开始,找到下一个运行时间
for t := sched.Next(earliestTime); !t.After(now); t = sched.Next(t) {
starts++
if starts > 100 {
return time.Time{}, time.Time{}, fmt.Errorf("错过了太多的运行")
}
lastMissed = t
}
return lastMissed, sched.Next(now), nil
}
// 此简化示例只使用下一次运行时间;getNextSchedule 返回的 lastMissed 在这里被忽略
_, nextRun, err := getNextSchedule(&cronJob, r.Now())
if err != nil {
log.Error(err, "无法计算下一个运行时间")
return ctrl.Result{}, err
}
最后,我们将创建下一个作业,以便在下一个运行时间触发。
// 我们将创建一个新的作业对象,并设置它的所有者引用以确保我们在删除时正确清理。
newJob := &kbatch.Job{
ObjectMeta: metav1.ObjectMeta{
GenerateName: cronJob.Name + "-",
Namespace: cronJob.Namespace,
OwnerReferences: []metav1.OwnerReference{
*metav1.NewControllerRef(&cronJob, batchv1.GroupVersion.WithKind("CronJob")),
},
Annotations: map[string]string{
scheduledTimeAnnotation: nextRun.Format(time.RFC3339),
},
},
Spec: cronJob.Spec.JobTemplate.Spec,
}
// 然后在集群上创建该作业
if err := r.Create(ctx, newJob); err != nil {
log.Error(err, "无法创建作业")
return ctrl.Result{}, err
}
log.V(0).Info("已创建新作业", "job", newJob)
// 我们已经创建了一个新的作业,所以我们将在下一个运行时间重新排队。
return ctrl.Result{RequeueAfter: nextRun.Sub(r.Now())}, nil
}
现在我们已经实现了 CronJobReconciler 的 Reconcile 方法,我们需要在 manager 中注册它。
我们将在 manager 中注册一个新的控制器,用于管理 CronJob 对象。
func (r *CronJobReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&batchv1.CronJob{}).
Owns(&kbatch.Job{}).
Complete(r)
}
这是一个复杂的任务,但现在我们有一个可工作的控制器。让我们对集群进行测试,如果没有任何问题,就部署它吧!
但首先,记得我们说过我们会再次回到 main.go
吗?让我们来看看发生了什么变化,以及我们需要添加什么。
project/cmd/main.go
版权所有 2024 年 Kubernetes 作者。
根据 Apache 许可证 2.0 版(“许可证”)获得许可;
除非符合许可证的规定,否则您不得使用此文件。
您可以在以下网址获取许可证的副本:
http://www.apache.org/licenses/LICENSE-2.0
除非适用法律要求或书面同意,根据许可证分发的软件是基于“按原样”的基础分发的,
没有任何明示或暗示的担保或条件。
请查看许可证以了解特定语言管理权限和限制。
package main
import (
"crypto/tls"
"flag"
"os"
// 导入所有 Kubernetes 客户端认证插件(例如 Azure、GCP、OIDC 等)
// 以确保 exec-entrypoint 和 run 可以利用它们。
_ "k8s.io/client-go/plugin/pkg/client/auth"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
"sigs.k8s.io/controller-runtime/pkg/webhook"
batchv1 "tutorial.kubebuilder.io/project/api/v1"
"tutorial.kubebuilder.io/project/internal/controller"
//+kubebuilder:scaffold:imports
)
要注意的第一个变化是,kubebuilder 已将新 API 组的包(batchv1
)添加到我们的 scheme 中。
这意味着我们可以在我们的控制器中使用这些对象。
如果我们将使用任何其他 CRD,我们将不得不以相同的方式添加它们的 scheme。
诸如 Job 之类的内置类型通过 clientgoscheme
添加了它们的 scheme。
var (
scheme = runtime.NewScheme()
setupLog = ctrl.Log.WithName("setup")
)
func init() {
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
utilruntime.Must(batchv1.AddToScheme(scheme))
//+kubebuilder:scaffold:scheme
}
另一个发生变化的地方是,kubebuilder 已添加了一个块,调用我们的 CronJob 控制器的 SetupWithManager
方法。
func main() {
var metricsAddr string
var enableLeaderElection bool
var probeAddr string
var secureMetrics bool
var enableHTTP2 bool
flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "leader-elect", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.")
flag.BoolVar(&secureMetrics, "metrics-secure", false,
"If set the metrics endpoint is served securely")
flag.BoolVar(&enableHTTP2, "enable-http2", false,
"If set, HTTP/2 will be enabled for the metrics and webhook servers")
opts := zap.Options{
Development: true,
}
opts.BindFlags(flag.CommandLine)
flag.Parse()
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
// 如果 enable-http2 标志为 false(默认值),则应禁用 http/2
// 由于其漏洞。更具体地说,禁用 http/2 将防止受到 HTTP/2 流取消和
// 快速重置 CVE 的影响。更多信息请参见:
// - https://github.com/advisories/GHSA-qppj-fm5r-hxr3
// - https://github.com/advisories/GHSA-4374-p667-p6c8
disableHTTP2 := func(c *tls.Config) {
setupLog.Info("disabling http/2")
c.NextProtos = []string{"http/1.1"}
}
tlsOpts := []func(*tls.Config){}
if !enableHTTP2 {
tlsOpts = append(tlsOpts, disableHTTP2)
}
webhookServer := webhook.NewServer(webhook.Options{
TLSOpts: tlsOpts,
})
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
Metrics: metricsserver.Options{
BindAddress: metricsAddr,
SecureServing: secureMetrics,
TLSOpts: tlsOpts,
},
WebhookServer: webhookServer,
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: "80807133.tutorial.kubebuilder.io",
// LeaderElectionReleaseOnCancel 定义了在 Manager 结束时领导者是否应主动下台。
// 这需要二进制文件在 Manager 停止后立即结束,否则,此设置是不安全的。
// 启用此设置将显著加快自愿领导者过渡的速度,因为新领导者无需等待 LeaseDuration 时间。
//
// 在默认提供的脚手架中,程序在 Manager 停止后立即结束,因此可以启用此选项。
// 但是,如果您正在执行或打算在 Manager 停止后执行任何操作,比如执行清理操作,
// 那么它的使用可能是不安全的。
// LeaderElectionReleaseOnCancel: true,
})
if err != nil {
setupLog.Error(err, "unable to start manager")
os.Exit(1)
}
if err = (&controller.CronJobReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "CronJob")
os.Exit(1)
}
我们还将为我们的类型设置 webhooks,接下来我们将讨论它们。
我们只需要将它们添加到 manager 中。由于我们可能希望单独运行 webhooks,
或者在本地测试控制器时不运行它们,我们将它们放在一个环境变量后面。
我们只需确保在本地运行时设置 ENABLE_WEBHOOKS=false
。
if os.Getenv("ENABLE_WEBHOOKS") != "false" {
if err = (&batchv1.CronJob{}).SetupWebhookWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create webhook", "webhook", "CronJob")
os.Exit(1)
}
}
//+kubebuilder:scaffold:builder
if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up health check")
os.Exit(1)
}
if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up ready check")
os.Exit(1)
}
setupLog.Info("starting manager")
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
setupLog.Error(err, "problem running manager")
os.Exit(1)
}
}
至此,我们的控制器就完成了。接下来,让我们为 CronJob 实现准入 Webhook。
如果你想为你的 CRD 实现准入 webhook ,你需要做的唯一事情就是实现 Defaulter
和(或)Validator
接口。
Kubebuilder 会为你处理其余工作,比如
创建 webhook 服务器。
确保服务器已添加到 manager 中。
为你的 webhook 创建处理程序。
在服务器中为每个处理程序注册一个路径。
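作为参考,这两个接口在 controller-runtime 的 webhook 包中的形状大致如下(仅为示意,具体签名请以您所用的 controller-runtime 版本为准):
// Defaulter:实现 Default() 即可获得一个设置默认值的变更(mutating)webhook
type Defaulter interface {
	runtime.Object
	Default()
}
// Validator:分别在创建、更新、删除时被调用
type Validator interface {
	runtime.Object
	ValidateCreate() (admission.Warnings, error)
	ValidateUpdate(old runtime.Object) (admission.Warnings, error)
	ValidateDelete() (admission.Warnings, error)
}
只要您的 CRD 类型实现了这些方法,controller-runtime 就会在调用 SetupWebhookWithManager 时为其注册相应的处理器。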
首先,让我们为我们的 CRD(CronJob)生成 webhook 框架。我们需要运行以下命令,带有 --defaulting
和 --programmatic-validation
标志(因为我们的测试项目将使用默认值和验证 webhook):
kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting --programmatic-validation
这将为你生成 webhook 函数,并在你的 main.go
中为你的 webhook 将其注册到 manager 中。
与你的 Go webhook 实现一起创建的默认 WebhookConfiguration 清单使用 API 版本 v1
。如果你的项目意图支持早于 v1.16 的 Kubernetes 集群版本,请设置 --webhook-version v1beta1
。查看webhook 参考文档 获取更多信息。
project/api/v1/cronjob_webhook.go
版权所有 2024 年 Kubernetes 作者。
根据 Apache 许可证 2.0 版进行许可;
除非符合许可证的规定,否则您不得使用此文件。
您可以在以下网址获取许可证副本:
http://www.apache.org/licenses/LICENSE-2.0
除非适用法律要求或书面同意,否则根据许可证分发的软件
按“原样”分发,没有任何担保或条件,无论是明示的还是暗示的。
请查看许可证以获取特定语言的权限和限制。
package v1
import (
"github.com/robfig/cron"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
validationutils "k8s.io/apimachinery/pkg/util/validation"
"k8s.io/apimachinery/pkg/util/validation/field"
ctrl "sigs.k8s.io/controller-runtime"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/webhook"
"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)
接下来,我们为 Webhook 设置一个日志记录器。
var cronjoblog = logf.Log.WithName("cronjob-resource")
然后,我们使用管理器设置 Webhook。
// SetupWebhookWithManager 将设置管理器以管理 Webhook
func (r *CronJob) SetupWebhookWithManager(mgr ctrl.Manager) error {
return ctrl.NewWebhookManagedBy(mgr).
For(r).
Complete()
}
请注意,我们使用 kubebuilder 标记生成 Webhook 清单。
此标记负责生成一个变更 Webhook 清单。
每个标记的含义可以在这里 找到。
//+kubebuilder:webhook:path=/mutate-batch-tutorial-kubebuilder-io-v1-cronjob,mutating=true,failurePolicy=fail,groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=create;update,versions=v1,name=mcronjob.kb.io,sideEffects=None,admissionReviewVersions=v1
我们使用 webhook.Defaulter
接口为我们的 CRD 设置默认值。
将自动提供一个调用此默认值的 Webhook。
Default
方法应该改变接收器,设置默认值。
var _ webhook.Defaulter = &CronJob{}
// Default 实现了 webhook.Defaulter,因此将为该类型注册 Webhook
func (r *CronJob) Default() {
cronjoblog.Info("默认值", "名称", r.Name)
if r.Spec.ConcurrencyPolicy == "" {
r.Spec.ConcurrencyPolicy = AllowConcurrent
}
if r.Spec.Suspend == nil {
r.Spec.Suspend = new(bool)
}
if r.Spec.SuccessfulJobsHistoryLimit == nil {
r.Spec.SuccessfulJobsHistoryLimit = new(int32)
*r.Spec.SuccessfulJobsHistoryLimit = 3
}
if r.Spec.FailedJobsHistoryLimit == nil {
r.Spec.FailedJobsHistoryLimit = new(int32)
*r.Spec.FailedJobsHistoryLimit = 1
}
}
此标记负责生成一个验证 Webhook 清单。
//+kubebuilder:webhook:verbs=create;update;delete,path=/validate-batch-tutorial-kubebuilder-io-v1-cronjob,mutating=false,failurePolicy=fail,groups=batch.tutorial.kubebuilder.io,resources=cronjobs,versions=v1,name=vcronjob.kb.io,sideEffects=None,admissionReviewVersions=v1
我们可以对我们的 CRD 进行超出声明性验证的验证。
通常,声明性验证应该足够了,但有时更复杂的用例需要复杂的验证。
例如,我们将在下面看到,我们使用此功能来验证格式良好的 cron 调度,而不是编写一个长正则表达式。
如果实现了 webhook.Validator
接口,将自动提供一个调用验证的 Webhook。
ValidateCreate
、ValidateUpdate
和 ValidateDelete
方法预期在创建、更新和删除时验证其接收器。
我们将 ValidateCreate
与 ValidateUpdate
分开,以允许像使某些字段不可变这样的行为,这样它们只能在创建时设置。
我们还将 ValidateDelete
与 ValidateUpdate
分开,以允许在删除时进行不同的验证行为。
在这里,我们只为 ValidateCreate
和 ValidateUpdate
使用相同的共享验证。在 ValidateDelete
中不执行任何操作,因为我们不需要在删除时验证任何内容。
var _ webhook.Validator = &CronJob{}
// ValidateCreate 实现了 webhook.Validator,因此将为该类型注册 Webhook
func (r *CronJob) ValidateCreate() (admission.Warnings, error) {
cronjoblog.Info("验证创建", "名称", r.Name)
return nil, r.validateCronJob()
}
// ValidateUpdate 实现了 webhook.Validator,因此将为该类型注册 Webhook
func (r *CronJob) ValidateUpdate(old runtime.Object) (admission.Warnings, error) {
cronjoblog.Info("验证更新", "名称", r.Name)
return nil, r.validateCronJob()
}
// ValidateDelete 实现了 webhook.Validator,因此将为该类型注册 Webhook
func (r *CronJob) ValidateDelete() (admission.Warnings, error) {
cronjoblog.Info("验证删除", "名称", r.Name)
// TODO(用户):在对象删除时填充您的验证逻辑。
return nil, nil
}
我们验证 CronJob 的名称和规范。
func (r *CronJob) validateCronJob() error {
var allErrs field.ErrorList
if err := r.validateCronJobName(); err != nil {
allErrs = append(allErrs, err)
}
if err := r.validateCronJobSpec(); err != nil {
allErrs = append(allErrs, err)
}
if len(allErrs) == 0 {
return nil
}
return apierrors.NewInvalid(
schema.GroupKind{Group: "batch.tutorial.kubebuilder.io", Kind: "CronJob"},
r.Name, allErrs)
}
一些字段通过 OpenAPI 模式进行声明性验证。
您可以在 API 设计部分找到 kubebuilder 验证标记(以 // +kubebuilder:validation 为前缀)。
您可以通过运行 controller-gen crd -w 来找到所有 kubebuilder 支持的用于声明式验证的标记,
或者在这里找到它们。
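作为对照,下面是一个使用声明式验证标记的独立示意片段(不属于本文件;类型与字段名均为假设,仅用于演示标记的写法):
type WidgetSpec struct {
	// 部署环境,只允许以下取值
	// +kubebuilder:validation:Enum=dev;staging;prod
	Environment string `json:"environment"`

	// 描述信息,限制最大长度
	// +kubebuilder:validation:MaxLength=64
	// +optional
	Description string `json:"description,omitempty"`
}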
func (r *CronJob) validateCronJobSpec() *field.Error {
// 来自 Kubernetes API 机制的字段助手帮助我们返回结构化良好的验证错误。
return validateScheduleFormat(
r.Spec.Schedule,
field.NewPath("spec").Child("schedule"))
}
我们需要验证 cron 调度是否格式良好。
func validateScheduleFormat(schedule string, fldPath *field.Path) *field.Error {
if _, err := cron.ParseStandard(schedule); err != nil {
return field.Invalid(fldPath, schedule, err.Error())
}
return nil
}
验证字符串字段的长度可以通过验证模式进行声明性验证。
但是,ObjectMeta.Name
字段是在 apimachinery 仓库的一个共享包中定义的,因此我们无法使用验证模式进行声明性验证。
func (r *CronJob) validateCronJobName() *field.Error {
if len(r.ObjectMeta.Name) > validationutils.DNS1035LabelMaxLength-11 {
// 与所有 Kubernetes 对象一样,作业名称最长为 63 个字符(必须适合 DNS 子域)。
// cronjob 控制器在创建作业时会给 cronjob 名称附加一个 11 个字符的后缀(`-$TIMESTAMP`),
// 因此 cronjob 名称的长度必须不超过 63-11=52 个字符。如果不在这里验证,作业创建将在稍后失败。
return field.Invalid(field.NewPath("metadata").Child("name"), r.Name, "必须不超过 52 个字符")
}
return nil
}
如果选择对 API 定义进行任何更改,则在继续之前,可以使用以下命令生成清单,如自定义资源(CRs)或自定义资源定义(CRDs):
make manifests
要测试控制器,请在本地针对集群运行它。
在继续之前,我们需要安装我们的 CRDs,如快速入门 中所述。这将自动使用 controller-tools 更新 YAML 清单(如果需要):
make install
现在我们已经安装了我们的 CRDs,我们可以针对集群运行控制器。这将使用我们连接到集群的任何凭据,因此我们暂时不需要担心 RBAC。
如果要在本地运行 Webhook,您需要为 Webhook 服务生成证书,并将其放在正确的目录下(默认为 /tmp/k8s-webhook-server/serving-certs/tls.{crt,key}
)。
如果您没有运行本地 API 服务器,您还需要弄清楚如何将流量从远程集群代理到本地 Webhook 服务器。
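如果确实需要在本地提供 Webhook 服务,可以用类似下面的方式生成一份自签名证书放到默认目录(仅为示意,生产环境请使用 cert-manager 等方案):
mkdir -p /tmp/k8s-webhook-server/serving-certs
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout /tmp/k8s-webhook-server/serving-certs/tls.key \
  -out /tmp/k8s-webhook-server/serving-certs/tls.crt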
因此,通常建议在进行本地代码运行测试循环时禁用 Webhook,如下所示。
在另一个终端中运行
export ENABLE_WEBHOOKS=false
make run
您应该会看到有关控制器启动的日志,但它目前还不会执行任何操作。
此时,我们需要一个 CronJob 进行测试。让我们编写一个样本到 config/samples/batch_v1_cronjob.yaml
,然后使用该样本:
apiVersion: batch.tutorial.kubebuilder.io/v1
kind: CronJob
metadata:
labels:
app.kubernetes.io/name: cronjob
app.kubernetes.io/instance: cronjob-sample
app.kubernetes.io/part-of: project
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/created-by: project
name: cronjob-sample
spec:
schedule: "*/1 * * * *"
startingDeadlineSeconds: 60
concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
kubectl create -f config/samples/batch_v1_cronjob.yaml
此时,您应该会看到大量活动。如果观察更改,您应该会看到您的 CronJob 正在运行,并更新状态:
kubectl get cronjob.batch.tutorial.kubebuilder.io -o yaml
kubectl get job
现在我们知道它可以正常工作了,接下来可以在集群中运行它。停止 make run
命令,并运行
make docker-build docker-push IMG=<some-registry>/<project-name>:tag
make deploy IMG=<some-registry>/<project-name>:tag
如果再次列出 CronJob,就像我们之前所做的那样,我们应该看到控制器再次正常运行!
我们建议使用 cert-manager 为 Webhook 服务器提供证书。只要它们将证书放在所需的位置,其他解决方案也应该可以正常工作。
您可以按照 cert-manager 文档 进行安装。
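例如,可以直接应用官方发布的清单进行安装(下面 URL 中的版本号仅为占位,请以 cert-manager 官方文档给出的版本为准):
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v<版本号>/cert-manager.yaml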
cert-manager 还有一个名为 CA 注入器 的组件,负责将 CA bundle 注入到 MutatingWebhookConfiguration
/ ValidatingWebhookConfiguration
中。
为了实现这一点,您需要在 MutatingWebhookConfiguration
/ ValidatingWebhookConfiguration
对象中使用一个带有键 cert-manager.io/inject-ca-from
的注释。注释的值应该指向一个现有的 证书请求实例 ,格式为 <证书命名空间>/<证书名称>
。
这是我们用于给 MutatingWebhookConfiguration
/ ValidatingWebhookConfiguration
对象添加注释的 kustomize 补丁:
# 这个补丁会向准入 Webhook 配置添加注释
# CERTIFICATE_NAMESPACE 和 CERTIFICATE_NAME 将由 kustomize 替换
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/name: mutatingwebhookconfiguration
app.kubernetes.io/instance: mutating-webhook-configuration
app.kubernetes.io/component: webhook
app.kubernetes.io/created-by: project
app.kubernetes.io/part-of: project
app.kubernetes.io/managed-by: kustomize
name: mutating-webhook-configuration
annotations:
cert-manager.io/inject-ca-from: CERTIFICATE_NAMESPACE/CERTIFICATE_NAME
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/name: validatingwebhookconfiguration
app.kubernetes.io/instance: validating-webhook-configuration
app.kubernetes.io/component: webhook
app.kubernetes.io/created-by: project
app.kubernetes.io/part-of: project
app.kubernetes.io/managed-by: kustomize
name: validating-webhook-configuration
annotations:
cert-manager.io/inject-ca-from: CERTIFICATE_NAMESPACE/CERTIFICATE_NAME
建议在 kind 集群中开发您的 Webhook,以便快速迭代。
为什么呢?
您可以在本地不到 1 分钟内启动一个多节点集群。
您可以在几秒钟内将其拆除。
您不需要将镜像推送到远程仓库。
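例如,下面的命令可以快速创建和销毁一个本地 kind 集群(集群名仅为示意):
kind create cluster --name webhook-dev
# ……在该集群上开发、测试 Webhook……
kind delete cluster --name webhook-dev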
您需要按照 这里 的说明安装 cert-manager 捆绑包。
运行以下命令在本地构建您的镜像。
make docker-build docker-push IMG=<some-registry>/<project-name>:tag
如果您使用的是 kind 集群,您不需要将镜像推送到远程容器注册表。您可以直接将本地镜像加载到指定的 kind 集群中:
kind load docker-image <your-image-name>:tag --name <your-kind-cluster-name>
您需要通过 kustomize 启用 Webhook 和 cert manager 配置。
config/default/kustomization.yaml
现在应该如下所示:
# Adds namespace to all resources.
namespace: project-system
# Value of this field is prepended to the
# names of all resources, e.g. a deployment named
# "wordpress" becomes "alices-wordpress".
# Note that it should also match with the prefix (text before '-') of the namespace
# field above.
namePrefix: project-
# Labels to add to all resources and selectors.
#labels:
#- includeSelectors: true
# pairs:
# someName: someValue
resources:
- ../crd
- ../rbac
- ../manager
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
- ../webhook
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
- ../certmanager
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
- ../prometheus
patches:
# Protect the /metrics endpoint by putting it behind auth.
# If you want your controller-manager to expose the /metrics
# endpoint w/o any authn/z, please comment the following line.
- path: manager_auth_proxy_patch.yaml
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
- path: manager_webhook_patch.yaml
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'.
# Uncomment 'CERTMANAGER' sections in crd/kustomization.yaml to enable the CA injection in the admission webhooks.
# 'CERTMANAGER' needs to be enabled to use ca injection
- path: webhookcainjection_patch.yaml
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
# Uncomment the following replacements to add the cert-manager CA injection annotations
replacements:
- source: # Add cert-manager annotation to ValidatingWebhookConfiguration, MutatingWebhookConfiguration and CRDs
kind: Certificate
group: cert-manager.io
version: v1
name: serving-cert # this name should match the one in certificate.yaml
fieldPath: .metadata.namespace # namespace of the certificate CR
targets:
- select:
kind: ValidatingWebhookConfiguration
fieldPaths:
- .metadata.annotations.[cert-manager.io/inject-ca-from]
options:
delimiter: '/'
index: 0
create: true
- select:
kind: MutatingWebhookConfiguration
fieldPaths:
- .metadata.annotations.[cert-manager.io/inject-ca-from]
options:
delimiter: '/'
index: 0
create: true
- select:
kind: CustomResourceDefinition
fieldPaths:
- .metadata.annotations.[cert-manager.io/inject-ca-from]
options:
delimiter: '/'
index: 0
create: true
- source:
kind: Certificate
group: cert-manager.io
version: v1
name: serving-cert # this name should match the one in certificate.yaml
fieldPath: .metadata.name
targets:
- select:
kind: ValidatingWebhookConfiguration
fieldPaths:
- .metadata.annotations.[cert-manager.io/inject-ca-from]
options:
delimiter: '/'
index: 1
create: true
- select:
kind: MutatingWebhookConfiguration
fieldPaths:
- .metadata.annotations.[cert-manager.io/inject-ca-from]
options:
delimiter: '/'
index: 1
create: true
- select:
kind: CustomResourceDefinition
fieldPaths:
- .metadata.annotations.[cert-manager.io/inject-ca-from]
options:
delimiter: '/'
index: 1
create: true
- source: # Add cert-manager annotation to the webhook Service
kind: Service
version: v1
name: webhook-service
fieldPath: .metadata.name # namespace of the service
targets:
- select:
kind: Certificate
group: cert-manager.io
version: v1
fieldPaths:
- .spec.dnsNames.0
- .spec.dnsNames.1
options:
delimiter: '.'
index: 0
create: true
- source:
kind: Service
version: v1
name: webhook-service
fieldPath: .metadata.namespace # namespace of the service
targets:
- select:
kind: Certificate
group: cert-manager.io
version: v1
fieldPaths:
- .spec.dnsNames.0
- .spec.dnsNames.1
options:
delimiter: '.'
index: 1
create: true
而 config/crd/kustomization.yaml
现在应该如下所示:
# This kustomization.yaml is not intended to be run by itself,
# since it depends on service name and namespace that are out of this kustomize package.
# It should be run by config/default
resources:
- bases/batch.tutorial.kubebuilder.io_cronjobs.yaml
#+kubebuilder:scaffold:crdkustomizeresource
patches:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.
# patches here are for enabling the conversion webhook for each CRD
- path: patches/webhook_in_cronjobs.yaml
#+kubebuilder:scaffold:crdkustomizewebhookpatch
# [CERTMANAGER] To enable cert-manager, uncomment all the sections with [CERTMANAGER] prefix.
# patches here are for enabling the CA injection for each CRD
- path: patches/cainjection_in_cronjobs.yaml
#+kubebuilder:scaffold:crdkustomizecainjectionpatch
# [WEBHOOK] To enable webhook, uncomment the following section
# the following config is for teaching kustomize how to do kustomization for CRDs.
configurations:
- kustomizeconfig.yaml
现在您可以通过以下命令将其部署到集群中:
make deploy IMG=<some-registry>/<project-name>:tag
等待一段时间,直到 Webhook Pod 启动并证书被提供。通常在 1 分钟内完成。
现在您可以创建一个有效的 CronJob 来测试您的 Webhooks。创建应该成功通过。
kubectl create -f config/samples/batch_v1_cronjob.yaml
您还可以尝试创建一个无效的 CronJob(例如,使用格式不正确的 schedule 字段)。您应该看到创建失败并带有验证错误。
如果您为同一集群中的 Pod 部署 Webhook,请注意引导问题,因为 Webhook Pod 的创建请求将被发送到尚未启动的 Webhook Pod 本身。
为使其正常工作,您可以使用 namespaceSelector (如果您的 Kubernetes 版本为 1.9+)或使用 objectSelector (如果您的 Kubernetes 版本为 1.15+)来跳过自身。
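下面是一个通过 namespaceSelector 跳过控制器自身所在命名空间的示意片段(假设您已为该命名空间打上 webhooks: disabled 标签;标签键值仅为假设,用于说明思路):
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutating-webhook-configuration
webhooks:
  - name: mcronjob.kb.io
    namespaceSelector:
      matchExpressions:
        - key: webhooks
          operator: NotIn
          values: ["disabled"]
    # ……其余字段(clientConfig、rules 等)省略
通常可以通过 config/webhook 目录下的 kustomize 补丁,把类似字段合入生成的 Webhook 配置中。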
测试 Kubernetes 控制器是一个庞大的主题,而 kubebuilder 为您生成的样板测试文件相对较少。
为了引导您了解 Kubebuilder 生成的控制器的集成测试模式,我们将回顾我们在第一个教程中构建的 CronJob,并为其编写一个简单的测试。
基本方法是,在生成的 suite_test.go
文件中,您将使用 envtest 创建一个本地 Kubernetes API 服务器,实例化和运行您的控制器,然后编写额外的 *_test.go
文件使用 Ginkgo 进行测试。
如果您想调整您的 envtest 集群的配置,请参阅 为集成测试配置 envtest 部分以及 envtest 文档
。
../../cronjob-tutorial/testdata/project/internal/controller/suite_test.go
版权所有 2024 年 Kubernetes 作者。
根据 Apache 许可证 2.0 版(“许可证”)许可;
除非符合许可证的规定,否则您不得使用此文件。
您可以在以下网址获取许可证的副本:
http://www.apache.org/licenses/LICENSE-2.0
除非适用法律要求或经书面同意,否则根据许可证分发的软件
按“原样”提供,不附带任何担保或条件,无论是明示的还是暗示的。
请查看许可证以了解特定语言下的权限和限制。
当我们在上一章 中使用 kubebuilder create api
创建 CronJob API 时,Kubebuilder 已经为您做了一些测试工作。
Kubebuilder 生成了一个 internal/controller/suite_test.go
文件,其中包含了设置测试环境的基本内容。
首先,它将包含必要的导入项。
package controller
// 这些测试使用 Ginkgo(BDD 风格的 Go 测试框架)。请参考
// http://onsi.github.io/ginkgo/ 了解更多关于 Ginkgo 的信息。
现在,让我们来看一下生成的代码。
var (
cfg *rest.Config
k8sClient client.Client // 您将在测试中使用此客户端。
testEnv *envtest.Environment
ctx context.Context
cancel context.CancelFunc
)
func TestControllers(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Controller Suite")
}
var _ = BeforeSuite(func() {
// 省略了一些设置代码
})
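被省略的设置代码通常大致如下(仅为示意性草图,具体内容请以生成的 suite_test.go 为准;它需要 context、path/filepath、envtest、zap、client、clientgoscheme 等相应导入,CRD 路径取自默认脚手架):
var _ = BeforeSuite(func() {
	logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))

	ctx, cancel = context.WithCancel(context.TODO())

	// 启动 envtest:加载 CRD 并拉起一个本地 API Server
	testEnv = &envtest.Environment{
		CRDDirectoryPaths:     []string{filepath.Join("..", "..", "config", "crd", "bases")},
		ErrorIfCRDPathMissing: true,
	}

	var err error
	cfg, err = testEnv.Start()
	Expect(err).NotTo(HaveOccurred())

	// 将我们的 API 类型注册到 scheme,并创建测试用的客户端
	err = batchv1.AddToScheme(scheme.Scheme)
	Expect(err).NotTo(HaveOccurred())

	k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme})
	Expect(err).NotTo(HaveOccurred())

	// 创建并启动 manager,注册 CronJobReconciler,让控制器在测试期间运行
	k8sManager, err := ctrl.NewManager(cfg, ctrl.Options{Scheme: scheme.Scheme})
	Expect(err).NotTo(HaveOccurred())

	err = (&CronJobReconciler{
		Client: k8sManager.GetClient(),
		Scheme: k8sManager.GetScheme(),
	}).SetupWithManager(k8sManager)
	Expect(err).NotTo(HaveOccurred())

	go func() {
		defer GinkgoRecover()
		err = k8sManager.Start(ctx)
		Expect(err).NotTo(HaveOccurred(), "failed to run manager")
	}()
})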
Kubebuilder 还生成了用于清理 envtest 并在控制器目录中实际运行测试文件的样板函数。
您不需要修改这些函数。
var _ = AfterSuite(func() {
// 省略了一些清理代码
})
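相应地,被省略的清理代码通常只是取消 manager 的 context 并关闭 envtest 环境(示意):
var _ = AfterSuite(func() {
	cancel()
	By("tearing down the test environment")
	err := testEnv.Stop()
	Expect(err).NotTo(HaveOccurred())
})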
现在,您的控制器在测试集群上运行,并且已准备好在您的 CronJob 上执行操作的客户端,我们可以开始编写集成测试了!
../../cronjob-tutorial/testdata/project/internal/controller/cronjob_controller_test.go
根据 Apache 许可证 2.0 版(“许可证”)许可;
除非符合许可证的规定,否则您不得使用此文件。
您可以在以下网址获取许可证的副本:
http://www.apache.org/licenses/LICENSE-2.0
除非适用法律要求或经书面同意,根据许可证分发的软件
按“原样”提供,不附带任何担保或条件,无论是明示的还是暗示的。
请查看许可证以了解特定语言下的权限和限制。
理想情况下,对于每个在 suite_test.go
中调用的控制器,我们应该有一个 <kind>_controller_test.go
。
因此,让我们为 CronJob 控制器编写示例测试(cronjob_controller_test.go
)。
和往常一样,我们从必要的导入项开始。我们还定义了一些实用变量。
package controller
import (
"context"
"reflect"
"time"
batchv1 "k8s.io/api/batch/v1"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
cronjobv1 "tutorial.kubebuilder.io/project/api/v1"
)
编写简单集成测试的第一步是实际创建一个 CronJob 实例,以便对其运行测试。
请注意,要创建 CronJob,您需要创建一个包含您的 CronJob 规范的存根 CronJob 结构。
请注意,当我们创建存根 CronJob 时,CronJob 还需要其所需的下游对象的存根。
如果没有下游的存根 Job 模板规范和下游的 Pod 模板规范,Kubernetes API 将无法创建 CronJob。
var _ = Describe("CronJob controller", func() {
// 为对象名称和测试超时/持续时间和间隔定义实用常量。
const (
CronjobName = "test-cronjob"
CronjobNamespace = "default"
JobName = "test-job"
timeout = time.Second * 10
duration = time.Second * 10
interval = time.Millisecond * 250
)
Context("当更新 CronJob 状态时", func() {
It("当创建新的 Job 时,应增加 CronJob 的 Status.Active 计数", func() {
By("创建一个新的 CronJob")
ctx := context.Background()
cronJob := &cronjobv1.CronJob{
TypeMeta: metav1.TypeMeta{
APIVersion: "batch.tutorial.kubebuilder.io/v1",
Kind: "CronJob",
},
ObjectMeta: metav1.ObjectMeta{
Name: CronjobName,
Namespace: CronjobNamespace,
},
Spec: cronjobv1.CronJobSpec{
Schedule: "1 * * * *",
JobTemplate: batchv1.JobTemplateSpec{
Spec: batchv1.JobSpec{
// 为简单起见,我们只填写了必填字段。
Template: v1.PodTemplateSpec{
Spec: v1.PodSpec{
// 为简单起见,我们只填写了必填字段。
Containers: []v1.Container{
{
Name: "test-container",
Image: "test-image",
},
},
RestartPolicy: v1.RestartPolicyOnFailure,
},
},
},
},
},
}
Expect(k8sClient.Create(ctx, cronJob)).Should(Succeed())
创建完这个 CronJob 后,让我们检查 CronJob 的 Spec 字段是否与我们传入的值匹配。
请注意,由于 k8s apiserver 在我们之前的 Create()
调用后可能尚未完成创建 CronJob,我们将使用 Gomega 的 Eventually() 测试函数,而不是 Expect(),以便让 apiserver 有机会完成创建我们的 CronJob。
Eventually()
将重复运行作为参数提供的函数,直到
(a) 函数的输出与随后的 Should()
调用中的预期值匹配,或者
(b) 尝试次数 * 间隔时间超过提供的超时值。
在下面的示例中,timeout 和 interval 是我们选择的 Go Duration 值。
cronjobLookupKey := types.NamespacedName{Name: CronjobName, Namespace: CronjobNamespace}
createdCronjob := &cronjobv1.CronJob{}
// 我们需要重试获取这个新创建的 CronJob,因为创建可能不会立即发生。
Eventually(func() bool {
err := k8sClient.Get(ctx, cronjobLookupKey, createdCronjob)
return err == nil
}, timeout, interval).Should(BeTrue())
// 让我们确保我们的 Schedule 字符串值被正确转换/处理。
Expect(createdCronjob.Spec.Schedule).Should(Equal("1 * * * *"))
现在我们在测试集群中创建了一个 CronJob,下一步是编写一个测试,实际测试我们的 CronJob 控制器的行为。
让我们测试负责更新 CronJob.Status.Active 以包含正在运行的 Job 的 CronJob 控制器逻辑。
我们将验证当 CronJob 有一个活动的下游 Job 时,其 CronJob.Status.Active 字段包含对该 Job 的引用。
首先,我们应该获取之前创建的测试 CronJob,并验证它当前是否没有任何活动的 Job。
我们在这里使用 Gomega 的 Consistently()
检查,以确保在一段时间内活动的 Job 计数保持为 0。
By("检查 CronJob 是否没有活动的 Jobs")
Consistently(func() (int, error) {
err := k8sClient.Get(ctx, cronjobLookupKey, createdCronjob)
if err != nil {
return -1, err
}
return len(createdCronjob.Status.Active), nil
}, duration, interval).Should(Equal(0))
接下来,我们实际创建一个属于我们的 CronJob 的存根 Job,以及其下游模板规范。
我们将 Job 的状态的 "Active" 计数设置为 2,以模拟 Job 运行两个 Pod,这意味着 Job 正在活动运行。
然后,我们获取存根 Job,并将其所有者引用设置为指向我们的测试 CronJob。
这确保测试 Job 属于我们的测试 CronJob,并由其跟踪。
完成后,我们创建我们的新 Job 实例。
By("创建一个新的 Job")
testJob := &batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: JobName,
Namespace: CronjobNamespace,
},
Spec: batchv1.JobSpec{
Template: v1.PodTemplateSpec{
Spec: v1.PodSpec{
// 为简单起见,我们只填写了必填字段。
Containers: []v1.Container{
{
Name: "test-container",
Image: "test-image",
},
},
RestartPolicy: v1.RestartPolicyOnFailure,
},
},
},
Status: batchv1.JobStatus{
Active: 2,
},
}
// 请注意,设置此所有者引用需要您的 CronJob 的 GroupVersionKind。
kind := reflect.TypeOf(cronjobv1.CronJob{}).Name()
gvk := cronjobv1.GroupVersion.WithKind(kind)
controllerRef := metav1.NewControllerRef(createdCronjob, gvk)
testJob.SetOwnerReferences([]metav1.OwnerReference{*controllerRef})
Expect(k8sClient.Create(ctx, testJob)).Should(Succeed())
将此 Job 添加到我们的测试 CronJob 应该触发我们控制器的协调逻辑。
之后,我们可以编写一个测试,评估我们的控制器是否最终按预期更新我们的 CronJob 的 Status 字段!
By("检查 CronJob 是否有一个活动的 Job")
Eventually(func() ([]string, error) {
err := k8sClient.Get(ctx, cronjobLookupKey, createdCronjob)
if err != nil {
return nil, err
}
names := []string{}
for _, job := range createdCronjob.Status.Active {
names = append(names, job.Name)
}
return names, nil
}, timeout, interval).Should(ConsistOf(JobName), "应在状态的活动作业列表中列出我们的活动作业 %s", JobName)
})
})
})
编写完所有这些代码后,您可以再次在您的 controllers/
目录中运行 go test ./...
来运行您的新测试!
上面的状态更新示例展示了一种适用于自定义 Kind 及其下游对象的通用测试策略。希望到目前为止,您已经学会了以下测试控制器行为的方法:
设置您的控制器在 envtest 集群上运行
编写用于创建测试对象的存根
隔离对象的更改以测试特定的控制器行为
社区中还有一些更复杂的项目使用 envtest 对控制器行为进行严格测试,可供进一步参考。
到目前为止,我们已经实现了相当全面的CronJob控制器,充分利用了Kubebuilder的大多数功能,并使用envtest为控制器编写了测试。
如果您想了解更多内容,请前往多版本教程 ,了解如何向项目添加新的API版本。
此外,您还可以尝试以下步骤(我们很快会在教程中介绍这些内容):
为 kubectl get 命令添加额外的打印列,以改善自定义资源在 kubectl get 命令输出中的显示;一个简单的示意见下文。
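例如,可以用 +kubebuilder:printcolumn 标记定义额外的打印列,放在 CronJob 类型定义上方、与其它 //+kubebuilder 标记并列(列名与 JSONPath 仅为示意):
//+kubebuilder:printcolumn:name="Schedule",type="string",JSONPath=".spec.schedule"
//+kubebuilder:printcolumn:name="Suspend",type="boolean",JSONPath=".spec.suspend"
//+kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
添加或修改这些标记后,重新运行 make manifests 以更新 CRD。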
Most projects start out with an alpha API that changes release to release.
However, eventually, most projects will need to move to a more stable API.
Once your API is stable though, you can’t make breaking changes to it.
That’s where API versions come into play.
Let’s make some changes to the CronJob
API spec and make sure all the
different versions are supported by our CronJob project.
If you haven’t already, make sure you’ve gone through the base CronJob
Tutorial .
CRD conversion support was introduced as an alpha feature in Kubernetes
1.13 (which means it’s not on by default, and needs to be enabled via
a feature gate ), and became beta in Kubernetes 1.15
(which means it’s on by default).
If you’re on Kubernetes 1.13-1.14, make sure to enable the feature gate.
If you’re on Kubernetes 1.12 or below, you’ll need a new cluster to use
conversion. Check out the kind instructions for
instructions on how to set up an all-in-one cluster.
Next, let’s figure out what changes we want to make…
A fairly common change in a Kubernetes API is to take some data that used
to be unstructured or stored in some special string format, and change it
to structured data. Our schedule
field fits the bill quite nicely for
this – right now, in v1
, our schedules look like
schedule: "*/1 * * * *"
That’s a pretty textbook example of a special string format (it’s also
pretty unreadable unless you’re a Unix sysadmin).
Let’s make it a bit more structured. According to our CronJob
code , we support “standard” Cron format.
In Kubernetes, all versions must be safely round-tripable through each
other . This means that if we convert from version 1 to version 2, and
then back to version 1, we must not lose information. Thus, any change we
make to our API must be compatible with whatever we supported in v1, and
also need to make sure anything we add in v2 is supported in v1. In some
cases, this means we need to add new fields to v1, but in our case, we
won’t have to, since we’re not adding new functionality.
Keeping all that in mind, let’s convert our example above to be
slightly more structured:
schedule:
minute: */1
Now, at least, we’ve got labels for each of our fields, but we can still
easily support all the different syntax for each field.
We’ll need a new API version for this change. Let’s call it v2:
kubebuilder create api --group batch --version v2 --kind CronJob
Press y
for “Create Resource” and n
for “Create Controller”.
Now, let’s copy over our existing types, and make the change:
project/api/v2/cronjob_types.go
Copyright 2023 The Kubernetes authors.
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Since we’re in a v2 package, controller-gen will assume this is for the v2
version automatically. We could override that with the +versionName
marker .
package v2
import (
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
We’ll leave our spec largely unchanged, except to change the schedule field to a new type.
// CronJobSpec defines the desired state of CronJob
type CronJobSpec struct {
// The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.
Schedule CronSchedule `json:"schedule"`
// +kubebuilder:validation:Minimum=0
// Optional deadline in seconds for starting the job if it misses scheduled
// time for any reason. Missed jobs executions will be counted as failed ones.
// +optional
StartingDeadlineSeconds *int64 `json:"startingDeadlineSeconds,omitempty"`
// Specifies how to treat concurrent executions of a Job.
// Valid values are:
// - "Allow" (default): allows CronJobs to run concurrently;
// - "Forbid": forbids concurrent runs, skipping next run if previous run hasn't finished yet;
// - "Replace": cancels currently running job and replaces it with a new one
// +optional
ConcurrencyPolicy ConcurrencyPolicy `json:"concurrencyPolicy,omitempty"`
// This flag tells the controller to suspend subsequent executions, it does
// not apply to already started executions. Defaults to false.
// +optional
Suspend *bool `json:"suspend,omitempty"`
// Specifies the job that will be created when executing a CronJob.
JobTemplate batchv1.JobTemplateSpec `json:"jobTemplate"`
// +kubebuilder:validation:Minimum=0
// The number of successful finished jobs to retain.
// This is a pointer to distinguish between explicit zero and not specified.
// +optional
SuccessfulJobsHistoryLimit *int32 `json:"successfulJobsHistoryLimit,omitempty"`
// +kubebuilder:validation:Minimum=0
// The number of failed finished jobs to retain.
// This is a pointer to distinguish between explicit zero and not specified.
// +optional
FailedJobsHistoryLimit *int32 `json:"failedJobsHistoryLimit,omitempty"`
}
Next, we’ll need to define a type to hold our schedule.
Based on our proposed YAML above, it’ll have a field for
each corresponding Cron “field”.
// describes a Cron schedule.
type CronSchedule struct {
// specifies the minute during which the job executes.
// +optional
Minute *CronField `json:"minute,omitempty"`
// specifies the hour during which the job executes.
// +optional
Hour *CronField `json:"hour,omitempty"`
// specifies the day of the month during which the job executes.
// +optional
DayOfMonth *CronField `json:"dayOfMonth,omitempty"`
// specifies the month during which the job executes.
// +optional
Month *CronField `json:"month,omitempty"`
// specifies the day of the week during which the job executes.
// +optional
DayOfWeek *CronField `json:"dayOfWeek,omitempty"`
}
Finally, we’ll define a wrapper type to represent a field.
We could attach additional validation to this field,
but for now we’ll just use it for documentation purposes.
// represents a Cron field specifier.
type CronField string
All the other types will stay the same as before.
// ConcurrencyPolicy describes how the job will be handled.
// Only one of the following concurrent policies may be specified.
// If none of the following policies is specified, the default one
// is AllowConcurrent.
// +kubebuilder:validation:Enum=Allow;Forbid;Replace
type ConcurrencyPolicy string
const (
// AllowConcurrent allows CronJobs to run concurrently.
AllowConcurrent ConcurrencyPolicy = "Allow"
// ForbidConcurrent forbids concurrent runs, skipping next run if previous
// hasn't finished yet.
ForbidConcurrent ConcurrencyPolicy = "Forbid"
// ReplaceConcurrent cancels currently running job and replaces it with a new one.
ReplaceConcurrent ConcurrencyPolicy = "Replace"
)
// CronJobStatus defines the observed state of CronJob
type CronJobStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
// A list of pointers to currently running jobs.
// +optional
Active []corev1.ObjectReference `json:"active,omitempty"`
// Information when was the last time the job was successfully scheduled.
// +optional
LastScheduleTime *metav1.Time `json:"lastScheduleTime,omitempty"`
}
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// CronJob is the Schema for the cronjobs API
type CronJob struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec CronJobSpec `json:"spec,omitempty"`
Status CronJobStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// CronJobList contains a list of CronJob
type CronJobList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []CronJob `json:"items"`
}
func init() {
SchemeBuilder.Register(&CronJob{}, &CronJobList{})
}
project/api/v1/cronjob_types.go
Copyright 2023 The Kubernetes authors.
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
package v1
import (
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
// CronJobSpec defines the desired state of CronJob
type CronJobSpec struct {
// +kubebuilder:validation:MinLength=0
// The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.
Schedule string `json:"schedule"`
// +kubebuilder:validation:Minimum=0
// Optional deadline in seconds for starting the job if it misses scheduled
// time for any reason. Missed jobs executions will be counted as failed ones.
// +optional
StartingDeadlineSeconds *int64 `json:"startingDeadlineSeconds,omitempty"`
// Specifies how to treat concurrent executions of a Job.
// Valid values are:
// - "Allow" (default): allows CronJobs to run concurrently;
// - "Forbid": forbids concurrent runs, skipping next run if previous run hasn't finished yet;
// - "Replace": cancels currently running job and replaces it with a new one
// +optional
ConcurrencyPolicy ConcurrencyPolicy `json:"concurrencyPolicy,omitempty"`
// This flag tells the controller to suspend subsequent executions, it does
// not apply to already started executions. Defaults to false.
// +optional
Suspend *bool `json:"suspend,omitempty"`
// Specifies the job that will be created when executing a CronJob.
JobTemplate batchv1.JobTemplateSpec `json:"jobTemplate"`
// +kubebuilder:validation:Minimum=0
// The number of successful finished jobs to retain.
// This is a pointer to distinguish between explicit zero and not specified.
// +optional
SuccessfulJobsHistoryLimit *int32 `json:"successfulJobsHistoryLimit,omitempty"`
// +kubebuilder:validation:Minimum=0
// The number of failed finished jobs to retain.
// This is a pointer to distinguish between explicit zero and not specified.
// +optional
FailedJobsHistoryLimit *int32 `json:"failedJobsHistoryLimit,omitempty"`
}
// ConcurrencyPolicy describes how the job will be handled.
// Only one of the following concurrent policies may be specified.
// If none of the following policies is specified, the default one
// is AllowConcurrent.
// +kubebuilder:validation:Enum=Allow;Forbid;Replace
type ConcurrencyPolicy string
const (
// AllowConcurrent allows CronJobs to run concurrently.
AllowConcurrent ConcurrencyPolicy = "Allow"
// ForbidConcurrent forbids concurrent runs, skipping next run if previous
// hasn't finished yet.
ForbidConcurrent ConcurrencyPolicy = "Forbid"
// ReplaceConcurrent cancels currently running job and replaces it with a new one.
ReplaceConcurrent ConcurrencyPolicy = "Replace"
)
// CronJobStatus defines the observed state of CronJob
type CronJobStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
// A list of pointers to currently running jobs.
// +optional
Active []corev1.ObjectReference `json:"active,omitempty"`
// Information when was the last time the job was successfully scheduled.
// +optional
LastScheduleTime *metav1.Time `json:"lastScheduleTime,omitempty"`
}
Since we’ll have more than one version, we’ll need to mark a storage version.
This is the version that the Kubernetes API server uses to store our data.
We’ll choose the v1 version for our project.
We’ll use the +kubebuilder:storageversion
to do this.
Note that multiple versions may exist in storage if they were written before
the storage version changes – changing the storage version only affects how
objects are created/updated after the change.
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:storageversion
// CronJob is the Schema for the cronjobs API
type CronJob struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec CronJobSpec `json:"spec,omitempty"`
Status CronJobStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// CronJobList contains a list of CronJob
type CronJobList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []CronJob `json:"items"`
}
func init() {
SchemeBuilder.Register(&CronJob{}, &CronJobList{})
}
Now that we’ve got our types in place, we’ll need to set up conversion…
Since we now have two different versions, and users can request either
version, we’ll have to define a way to convert between our version. For
CRDs, this is done using a webhook, similar to the defaulting and
validating webhooks we defined in the base
tutorial . Like before,
controller-runtime will help us wire together the nitty-gritty bits, we
just have to implement the actual conversion.
Before we do that, though, we’ll need to understand how controller-runtime
thinks about versions. Namely:
A simple approach to defining conversion might be to define conversion
functions to convert between each of our versions. Then, whenever we need
to convert, we’d look up the appropriate function, and call it to run the
conversion.
This works fine when we just have two versions, but what if we had
4 types? 8 types? That’d be a lot of conversion functions.
Instead, controller-runtime models conversion in terms of a “hub and
spoke” model – we mark one version as the “hub”, and all other versions
just define conversion to and from the hub.
Then, if we have to convert between two non-hub versions, we first convert
to the hub version, and then to our desired version.
This cuts down on the number of conversion functions that we have to
define, and is modeled off of what Kubernetes does internally.
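For reference, the relevant interfaces in controller-runtime’s conversion package look roughly like this (a sketch only; check the conversion package in your controller-runtime version for the exact definitions):
// Hub marks a type as the version every spoke converts to and from.
type Hub interface {
	runtime.Object
	Hub()
}
// Convertible (a "spoke") knows how to convert itself to and from the hub.
type Convertible interface {
	runtime.Object
	ConvertTo(dst Hub) error
	ConvertFrom(src Hub) error
}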
When API clients, like kubectl or your controller, request a particular
version of your resource, the Kubernetes API server needs to return
a result that’s of that version. However, that version might not match
the version stored by the API server.
In that case, the API server needs to know how to convert between the
desired version and the stored version. Since the conversions aren’t
built in for CRDs, the Kubernetes API server calls out to a webhook to do
the conversion instead. For Kubebuilder, this webhook is implemented by
controller-runtime, and performs the hub-and-spoke conversions that we
discussed above.
Now that we have the model for conversion down pat, we can actually
implement our conversions.
With our model for conversion in place, it’s time to actually implement
the conversion functions. We’ll put them in a file called
cronjob_conversion.go
next to our cronjob_types.go
file, to avoid
cluttering up our main types file with extra functions.
First, we’ll implement the hub. We’ll choose the v1 version as the hub:
project/api/v1/cronjob_conversion.go
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
package v1
Implementing the hub method is pretty easy – we just have to add an empty
method called Hub()
to serve as a
marker .
We could also just put this inline in our cronjob_types.go
file.
// Hub marks this type as a conversion hub.
func (*CronJob) Hub() {}
Then, we’ll implement our spoke, the v2 version:
project/api/v2/cronjob_conversion.go
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
package v2
For imports, we’ll need the controller-runtime
conversion
package, plus the API version for our hub type (v1), and finally some of the
standard packages.
import (
"fmt"
"strings"
"sigs.k8s.io/controller-runtime/pkg/conversion"
"tutorial.kubebuilder.io/project/api/v1"
)
Our “spoke” versions need to implement the
Convertible
interface. Namely, they’ll need ConvertTo
and ConvertFrom
methods to convert to/from
the hub version.
ConvertTo is expected to modify its argument to contain the converted object.
Most of the conversion is straightforward copying, except for converting our changed field.
// ConvertTo converts this CronJob to the Hub version (v1).
func (src *CronJob) ConvertTo(dstRaw conversion.Hub) error {
dst := dstRaw.(*v1.CronJob)
sched := src.Spec.Schedule
scheduleParts := []string{"*", "*", "*", "*", "*"}
if sched.Minute != nil {
scheduleParts[0] = string(*sched.Minute)
}
if sched.Hour != nil {
scheduleParts[1] = string(*sched.Hour)
}
if sched.DayOfMonth != nil {
scheduleParts[2] = string(*sched.DayOfMonth)
}
if sched.Month != nil {
scheduleParts[3] = string(*sched.Month)
}
if sched.DayOfWeek != nil {
scheduleParts[4] = string(*sched.DayOfWeek)
}
dst.Spec.Schedule = strings.Join(scheduleParts, " ")
The rest of the conversion is pretty rote.
// ObjectMeta
dst.ObjectMeta = src.ObjectMeta
// Spec
dst.Spec.StartingDeadlineSeconds = src.Spec.StartingDeadlineSeconds
dst.Spec.ConcurrencyPolicy = v1.ConcurrencyPolicy(src.Spec.ConcurrencyPolicy)
dst.Spec.Suspend = src.Spec.Suspend
dst.Spec.JobTemplate = src.Spec.JobTemplate
dst.Spec.SuccessfulJobsHistoryLimit = src.Spec.SuccessfulJobsHistoryLimit
dst.Spec.FailedJobsHistoryLimit = src.Spec.FailedJobsHistoryLimit
// Status
dst.Status.Active = src.Status.Active
dst.Status.LastScheduleTime = src.Status.LastScheduleTime
return nil
}
ConvertFrom is expected to modify its receiver to contain the converted object.
Most of the conversion is straightforward copying, except for converting our changed field.
// ConvertFrom converts from the Hub version (v1) to this version.
func (dst *CronJob) ConvertFrom(srcRaw conversion.Hub) error {
src := srcRaw.(*v1.CronJob)
schedParts := strings.Split(src.Spec.Schedule, " ")
if len(schedParts) != 5 {
return fmt.Errorf("invalid schedule: not a standard 5-field schedule")
}
partIfNeeded := func(raw string) *CronField {
if raw == "*" {
return nil
}
part := CronField(raw)
return &part
}
dst.Spec.Schedule.Minute = partIfNeeded(schedParts[0])
dst.Spec.Schedule.Hour = partIfNeeded(schedParts[1])
dst.Spec.Schedule.DayOfMonth = partIfNeeded(schedParts[2])
dst.Spec.Schedule.Month = partIfNeeded(schedParts[3])
dst.Spec.Schedule.DayOfWeek = partIfNeeded(schedParts[4])
The rest of the conversion is pretty rote.
// ObjectMeta
dst.ObjectMeta = src.ObjectMeta
// Spec
dst.Spec.StartingDeadlineSeconds = src.Spec.StartingDeadlineSeconds
dst.Spec.ConcurrencyPolicy = ConcurrencyPolicy(src.Spec.ConcurrencyPolicy)
dst.Spec.Suspend = src.Spec.Suspend
dst.Spec.JobTemplate = src.Spec.JobTemplate
dst.Spec.SuccessfulJobsHistoryLimit = src.Spec.SuccessfulJobsHistoryLimit
dst.Spec.FailedJobsHistoryLimit = src.Spec.FailedJobsHistoryLimit
// Status
dst.Status.Active = src.Status.Active
dst.Status.LastScheduleTime = src.Status.LastScheduleTime
return nil
}
Now that we’ve got our conversions in place, all that we need to do is
wire up our main to serve the webhook!
Our conversion is in place, so all that’s left is to tell
controller-runtime about our conversion.
Normally, we’d run
kubebuilder create webhook --group batch --version v1 --kind CronJob --conversion
to scaffold out the webhook setup. However, we’ve already got webhook
setup, from when we built our defaulting and validating webhooks!
project/api/v1/cronjob_webhook.go
Copyright 2023 The Kubernetes authors.
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
package v1
import (
"github.com/robfig/cron"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
validationutils "k8s.io/apimachinery/pkg/util/validation"
"k8s.io/apimachinery/pkg/util/validation/field"
ctrl "sigs.k8s.io/controller-runtime"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/webhook"
)
var cronjoblog = logf.Log.WithName("cronjob-resource")
This setup doubles as setup for our conversion webhooks: as long as our
types implement the
Hub and
Convertible
interfaces, a conversion webhook will be registered.
func (r *CronJob) SetupWebhookWithManager(mgr ctrl.Manager) error {
return ctrl.NewWebhookManagedBy(mgr).
For(r).
Complete()
}
// +kubebuilder:webhook:path=/mutate-batch-tutorial-kubebuilder-io-v1-cronjob,mutating=true,failurePolicy=fail,groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=create;update,versions=v1,name=mcronjob.kb.io,sideEffects=None,admissionReviewVersions=v1
var _ webhook.Defaulter = &CronJob{}
// Default implements webhook.Defaulter so a webhook will be registered for the type
func (r *CronJob) Default() {
cronjoblog.Info("default", "name", r.Name)
if r.Spec.ConcurrencyPolicy == "" {
r.Spec.ConcurrencyPolicy = AllowConcurrent
}
if r.Spec.Suspend == nil {
r.Spec.Suspend = new(bool)
}
if r.Spec.SuccessfulJobsHistoryLimit == nil {
r.Spec.SuccessfulJobsHistoryLimit = new(int32)
*r.Spec.SuccessfulJobsHistoryLimit = 3
}
if r.Spec.FailedJobsHistoryLimit == nil {
r.Spec.FailedJobsHistoryLimit = new(int32)
*r.Spec.FailedJobsHistoryLimit = 1
}
}
// +kubebuilder:webhook:verbs=create;update;delete,path=/validate-batch-tutorial-kubebuilder-io-v1-cronjob,mutating=false,failurePolicy=fail,groups=batch.tutorial.kubebuilder.io,resources=cronjobs,versions=v1,name=vcronjob.kb.io,sideEffects=None,admissionReviewVersions=v1
var _ webhook.Validator = &CronJob{}
// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (r *CronJob) ValidateCreate() error {
cronjoblog.Info("validate create", "name", r.Name)
return r.validateCronJob()
}
// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
func (r *CronJob) ValidateUpdate(old runtime.Object) error {
cronjoblog.Info("validate update", "name", r.Name)
return r.validateCronJob()
}
// ValidateDelete implements webhook.Validator so a webhook will be registered for the type
func (r *CronJob) ValidateDelete() error {
cronjoblog.Info("validate delete", "name", r.Name)
// TODO(user): fill in your validation logic upon object deletion.
return nil
}
func (r *CronJob) validateCronJob() error {
var allErrs field.ErrorList
if err := r.validateCronJobName(); err != nil {
allErrs = append(allErrs, err)
}
if err := r.validateCronJobSpec(); err != nil {
allErrs = append(allErrs, err)
}
if len(allErrs) == 0 {
return nil
}
return apierrors.NewInvalid(
schema.GroupKind{Group: "batch.tutorial.kubebuilder.io", Kind: "CronJob"},
r.Name, allErrs)
}
func (r *CronJob) validateCronJobSpec() *field.Error {
// The field helpers from the kubernetes API machinery help us return nicely
// structured validation errors.
return validateScheduleFormat(
r.Spec.Schedule,
field.NewPath("spec").Child("schedule"))
}
func validateScheduleFormat(schedule string, fldPath *field.Path) *field.Error {
if _, err := cron.ParseStandard(schedule); err != nil {
return field.Invalid(fldPath, schedule, err.Error())
}
return nil
}
func (r *CronJob) validateCronJobName() *field.Error {
if len(r.ObjectMeta.Name) > validationutils.DNS1035LabelMaxLength-11 {
// The job name length is 63 character like all Kubernetes objects
// (which must fit in a DNS subdomain). The cronjob controller appends
// a 11-character suffix to the cronjob (`-$TIMESTAMP`) when creating
// a job. The job name length limit is 63 characters. Therefore cronjob
// names must have length <= 63-11=52. If we don't validate this here,
// then job creation will fail later.
return field.Invalid(field.NewPath("metadata").Child("name"), r.Name, "must be no more than 52 characters")
}
return nil
}
Similarly, our existing main file is sufficient:
project/cmd/main.go
Copyright 2023 The Kubernetes authors.
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
package main
import (
"flag"
"os"
// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
// to ensure that exec-entrypoint and run can make use of them.
_ "k8s.io/client-go/plugin/pkg/client/auth"
kbatchv1 "k8s.io/api/batch/v1"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
batchv1 "tutorial.kubebuilder.io/project/api/v1"
batchv2 "tutorial.kubebuilder.io/project/api/v2"
"tutorial.kubebuilder.io/project/internal/controller"
//+kubebuilder:scaffold:imports
)
var (
scheme = runtime.NewScheme()
setupLog = ctrl.Log.WithName("setup")
)
func init() {
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
utilruntime.Must(kbatchv1.AddToScheme(scheme)) // we've added this ourselves
utilruntime.Must(batchv1.AddToScheme(scheme))
utilruntime.Must(batchv2.AddToScheme(scheme))
//+kubebuilder:scaffold:scheme
}
func main() {
var metricsAddr string
var enableLeaderElection bool
var probeAddr string
flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "leader-elect", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.")
opts := zap.Options{
Development: true,
}
opts.BindFlags(flag.CommandLine)
flag.Parse()
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
MetricsBindAddress: metricsAddr,
Port: 9443,
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: "80807133.tutorial.kubebuilder.io",
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
// speeds up voluntary leader transitions as the new leader don't have to wait
// LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
// the manager stops, so would be fine to enable this option. However,
// if you are doing or is intended to do any operation such as perform cleanups
// after the manager stops then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
if err != nil {
setupLog.Error(err, "unable to start manager")
os.Exit(1)
}
if err = (&controller.CronJobReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "CronJob")
os.Exit(1)
}
Our existing call to SetupWebhookWithManager registers our conversion webhooks with the manager, too.
if os.Getenv("ENABLE_WEBHOOKS") != "false" {
if err = (&batchv1.CronJob{}).SetupWebhookWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create webhook", "webhook", "CronJob")
os.Exit(1)
}
if err = (&batchv2.CronJob{}).SetupWebhookWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create webhook", "webhook", "CronJob")
os.Exit(1)
}
}
//+kubebuilder:scaffold:builder
if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up health check")
os.Exit(1)
}
if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up ready check")
os.Exit(1)
}
setupLog.Info("starting manager")
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
setupLog.Error(err, "problem running manager")
os.Exit(1)
}
}
Everything’s set up and ready to go! All that’s left now is to test out
our webhooks.
Before we can test out our conversion, we’ll need to enable them in our CRD:
Kubebuilder generates Kubernetes manifests under the config
directory with webhook
bits disabled. To enable them, we need to:
Enable patches/webhook_in_<kind>.yaml
and
patches/cainjection_in_<kind>.yaml
in
config/crd/kustomization.yaml
file.
Enable ../certmanager
and ../webhook
directories under the
bases
section in config/default/kustomization.yaml
file.
Enable manager_webhook_patch.yaml
and webhookcainjection_patch.yaml
under the patches
section in config/default/kustomization.yaml
file.
Enable all the vars under the CERTMANAGER
section in
config/default/kustomization.yaml
file.
Additionally, if present in our Makefile, we’ll need to set the CRD_OPTIONS
variable to just
"crd"
, removing the trivialVersions
option (this ensures that we
actually generate validation for each version , instead of
telling Kubernetes that they’re the same):
CRD_OPTIONS ?= "crd"
Now we have all our code changes and manifests in place, so let’s deploy it to
the cluster and test it out.
You’ll need cert-manager installed
(version 0.9.0+
) unless you’ve got some other certificate management
solution. The Kubebuilder team has tested the instructions in this tutorial
with
0.9.0-alpha.0
release.
Once all our ducks are in a row with certificates, we can run make install deploy
(as normal) to deploy all the bits (CRD,
controller-manager deployment) onto the cluster.
Once all of the bits are up and running on the cluster with conversion enabled, we can test out our
conversion by requesting different versions.
We’ll make a v2 version based on our v1 version (put it under config/samples):
apiVersion: batch.tutorial.kubebuilder.io/v2
kind: CronJob
metadata:
labels:
app.kubernetes.io/name: cronjob
app.kubernetes.io/instance: cronjob-sample
app.kubernetes.io/part-of: project
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/created-by: project
name: cronjob-sample
spec:
schedule:
minute: "*/1"
startingDeadlineSeconds: 60
concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
Then, we can create it on the cluster:
kubectl apply -f config/samples/batch_v2_cronjob.yaml
If we’ve done everything correctly, it should create successfully,
and we should be able to fetch it using both the v2 resource
kubectl get cronjobs.v2.batch.tutorial.kubebuilder.io -o yaml
apiVersion: batch.tutorial.kubebuilder.io/v2
kind: CronJob
metadata:
labels:
app.kubernetes.io/name: cronjob
app.kubernetes.io/instance: cronjob-sample
app.kubernetes.io/part-of: project
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/created-by: project
name: cronjob-sample
spec:
schedule:
minute: "*/1"
startingDeadlineSeconds: 60
concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
and the v1 resource
kubectl get cronjobs.v1.batch.tutorial.kubebuilder.io -o yaml
apiVersion: batch.tutorial.kubebuilder.io/v1
kind: CronJob
metadata:
labels:
app.kubernetes.io/name: cronjob
app.kubernetes.io/instance: cronjob-sample
app.kubernetes.io/part-of: project
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/created-by: project
name: cronjob-sample
spec:
schedule: "*/1 * * * *"
startingDeadlineSeconds: 60
concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
Both should be filled out, and look equivalent to our v2 and v1 samples,
respectively. Notice that each has a different API version.
Finally, if we wait a bit, we should notice that our CronJob continues to
reconcile, even though our controller is written against our v1 API version.
When we access our API types from Go code, we ask for a specific version
by using that version’s Go type (e.g. batchv2.CronJob
).
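To make that concrete, here is a minimal sketch (not part of the tutorial source) of reading the same object through both Go types with the controller-runtime client; the API server converts between versions on the fly. The import paths assume the tutorial’s module layout.
import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"

	batchv1 "tutorial.kubebuilder.io/project/api/v1"
	batchv2 "tutorial.kubebuilder.io/project/api/v2"
)

// fetchBothVersions reads the sample CronJob once as v1 and once as v2.
func fetchBothVersions(ctx context.Context, c client.Client) error {
	key := client.ObjectKey{Namespace: "default", Name: "cronjob-sample"}

	var asV1 batchv1.CronJob
	if err := c.Get(ctx, key, &asV1); err != nil {
		return err
	}

	var asV2 batchv2.CronJob
	return c.Get(ctx, key, &asV2)
}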
You might’ve noticed that the above invocations of kubectl looked
a little different from what we usually do – namely, they specify
a group-version-resource , instead of just a resource.
When we write kubectl get cronjob
, kubectl needs to figure out which
group-version-resource that maps to. To do this, it uses the discovery
API to figure out the preferred version of the cronjob
resource. For
CRDs, this is more-or-less the latest stable version (see the CRD
docs for specific details).
With our updates to CronJob, this means that kubectl get cronjob
fetches
the batch/v2
group-version.
If we want to specify an exact version, we can use kubectl get resource.version.group
, as we do above.
You should always use fully-qualified group-version-resource syntax in
scripts . kubectl get resource
is for humans, self-aware robots, and
other sentient beings that can figure out new versions. kubectl get resource.version.group
is for everything else.
The ComponentConfig has been deprecated in the Controller-Runtime since its version 0.15.0. More info
Moreover, it has undergone breaking changes and is no longer functioning as intended.
As a result, Kubebuilder, which heavily relies on the Controller Runtime, has also deprecated this feature,
no longer guaranteeing its functionality from version 3.11.0 onwards. You can find additional details on this issue here .
Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.
Instead of relying on ComponentConfig, you can now directly utilize manager.Options
to achieve similar configuration capabilities.
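As a rough sketch of that alternative (the field names mirror the scaffold shown earlier in this document; newer controller-runtime releases rename some of them, notably the metrics settings), the values a ComponentConfig file would carry can simply be set on the Options passed to NewManager:
// Inside main(), where scheme, setupLog and os are already available in the scaffold.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
	Scheme:                 scheme,
	MetricsBindAddress:     ":8080",
	HealthProbeBindAddress: ":8081",
	LeaderElection:         true,
	LeaderElectionID:       "80807133.tutorial.kubebuilder.io",
})
if err != nil {
	setupLog.Error(err, "unable to start manager")
	os.Exit(1)
}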
Nearly every project that is built for Kubernetes will eventually need to support passing additional configuration into the controller. This could be to enable better logging, turn specific feature gates on or off, set the sync period, or a myriad of other controls. Previously this was commonly done using CLI flags that your main.go would parse to make them accessible within your program. While this works, it’s not a forward-looking design, and the Kubernetes community has been migrating the core components away from this and toward using versioned config files, referred to as “component configs”.
The rest of this tutorial will show you how to configure your kubebuilder project with the component config type and then moves on to implementing a custom type so that you can extend this capability.
The ComponentConfig has been deprecated in the Controller-Runtime since its version 0.15.0. More info
Moreover, it has undergone breaking changes and is no longer functioning as intended.
As a result, Kubebuilder, which heavily relies on the Controller Runtime, has also deprecated this feature,
no longer guaranteeing its functionality from version 3.11.0 onwards. You can find additional details on this issue here .
Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.
This tutorial will show you how to create a custom configuration file for your
project by modifying a project generated with the --component-config
flag
passed to the init
command. The full tutorial’s source can be found
here . Make sure you’ve gone through the installation
steps before continuing.
# we'll use a domain of tutorial.kubebuilder.io,
# so all API groups will be <group>.tutorial.kubebuilder.io.
kubebuilder init --domain tutorial.kubebuilder.io --component-config
If you’ve previously generated a project, you can add support for parsing the config file by making the following changes to main.go.
First, add a new flag
to specify the path that the component config file
should be loaded from.
var configFile string
flag.StringVar(&configFile, "config", "",
"The controller will load its initial configuration from this file. "+
"Omit this flag to use the default configuration values. "+
"Command-line flags override configuration from this file.")
Now, we can set up the Options struct and check whether configFile is set; this allows backwards compatibility. If it’s set, we’ll then use the AndFrom function on Options to parse and populate the Options from the config.
var err error
options := ctrl.Options{Scheme: scheme}
if configFile != "" {
options, err = options.AndFrom(ctrl.ConfigFile().AtPath(configFile))
if err != nil {
setupLog.Error(err, "unable to load the config file")
os.Exit(1)
}
}
If you have previously exposed other flags, like --metrics-bind-addr or --enable-leader-election, you’ll want to set those on the Options before loading the config from the file, as in the sketch below.
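A minimal sketch of that ordering, assuming the flag names mentioned above and the configFile flag added earlier; per the flag help text, command-line values take precedence over the file:
var metricsAddr string
var enableLeaderElection bool
var configFile string
flag.StringVar(&metricsAddr, "metrics-bind-addr", ":8080", "The address the metric endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false, "Enable leader election for controller manager.")
flag.StringVar(&configFile, "config", "", "The controller will load its initial configuration from this file.")
flag.Parse()

// Set the flag values on Options first...
options := ctrl.Options{
	Scheme:             scheme,
	MetricsBindAddress: metricsAddr,
	LeaderElection:     enableLeaderElection,
}
// ...and only then load the config file, so the file fills in what the flags left unset.
if configFile != "" {
	var err error
	options, err = options.AndFrom(ctrl.ConfigFile().AtPath(configFile))
	if err != nil {
		setupLog.Error(err, "unable to load the config file")
		os.Exit(1)
	}
}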
Lastly, we’ll change the NewManager
call to use the options
variable we
defined above.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), options)
With that out of the way, we can get on to defining our new config!
Create the file /config/manager/controller_manager_config.yaml
with the following content:
apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
kind: ControllerManagerConfig
health:
healthProbeBindAddress: :8081
metrics:
bindAddress: 127.0.0.1:8080
webhook:
port: 9443
leaderElection:
leaderElect: true
resourceName: ecaf1259.tutorial.kubebuilder.io
# leaderElectionReleaseOnCancel defines if the leader should step down voluntarily
# when the Manager ends. This requires the binary to immediately end when the
# Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
# speeds up voluntary leader transitions as the new leader doesn't have to wait
# LeaseDuration time first.
# In the default scaffold provided, the program ends immediately after
# the manager stops, so it would be fine to enable this option. However,
# if you are doing, or intend to do, any operation such as performing cleanups
# after the manager stops, then its usage might be unsafe.
# leaderElectionReleaseOnCancel: true
Update the file /config/manager/kustomization.yaml
by adding at the bottom the following content:
generatorOptions:
disableNameSuffixHash: true
configMapGenerator:
- name: manager-config
files:
- controller_manager_config.yaml
Update the file default/kustomization.yaml
by adding under the patchesStrategicMerge:
key the following patch:
patchesStrategicMerge:
# Mount the controller config file for loading manager configurations
# through a ComponentConfig type
- manager_config_patch.yaml
Update the file default/manager_config_patch.yaml
by adding under the spec:
key the following patch:
spec:
template:
spec:
containers:
- name: manager
args:
- "--config=controller_manager_config.yaml"
volumeMounts:
- name: manager-config
mountPath: /controller_manager_config.yaml
subPath: controller_manager_config.yaml
volumes:
- name: manager-config
configMap:
name: manager-config
The ComponentConfig has been deprecated in the Controller-Runtime since its version 0.15.0. More info
Moreover, it has undergone breaking changes and is no longer functioning as intended.
As a result, Kubebuilder, which heavily relies on the Controller Runtime, has also deprecated this feature,
no longer guaranteeing its functionality from version 3.11.0 onwards. You can find additional details on this issue here .
Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.
Now that you have a component config base project, we need to customize the values that are passed into the controller. To do this, we can take a look at config/manager/controller_manager_config.yaml.
controller_manager_config.yaml
apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
kind: ControllerManagerConfig
metrics:
bindAddress: 127.0.0.1:8080
webhook:
port: 9443
leaderElection:
leaderElect: true
resourceName: 80807133.tutorial.kubebuilder.io
To see all the available fields, you can look at the v1alpha Controller Runtime config ControllerManagerConfiguration.
The ComponentConfig has been deprecated in the Controller-Runtime since its version 0.15.0. More info
Moreover, it has undergone breaking changes and is no longer functioning as intended.
As a result, Kubebuilder, which heavily relies on the Controller Runtime, has also deprecated this feature,
no longer guaranteeing its functionality from version 3.11.0 onwards. You can find additional details on this issue here .
Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.
If you don’t need to add custom fields to configure your project, you can stop now and move on. If you’d like to be able to pass additional information, keep reading.
If your project needs to accept additional configuration that is not controller-runtime specific, e.g. ClusterName, Region, or anything else serializable into yaml, you can do this by using kubebuilder to create a new type and then updating your main.go to set up the new type for parsing.
The rest of this tutorial will walk through implementing a custom component
config type.
The ComponentConfig has been deprecated in the Controller-Runtime since its version 0.15.0. More info
Moreover, it has undergone breaking changes and is no longer functioning as intended.
As a result, Kubebuilder, which heavily relies on the Controller Runtime, has also deprecated this feature,
no longer guaranteeing its functionality from version 3.11.0 onwards. You can find additional details on this issue here .
Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.
To scaffold out a new config Kind, we can use kubebuilder create api
.
kubebuilder create api --group config --version v2 --kind ProjectConfig --resource --controller=false --make=false
Then, run make build to implement the interface for your API type, which will generate the file zz_generated.deepcopy.go.
You may recognize this command from the CronJob tutorial, although here we are explicitly setting --controller=false because ProjectConfig is not intended to be an API extension and cannot be reconciled.
This will create a new type file in api/config/v2/ for the ProjectConfig kind. We’ll need to change this file to embed the v1alpha1.ControllerManagerConfigurationSpec.
projectconfig_types.go
Copyright 2020 The Kubernetes authors.
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
We start out simply enough: we import the config/v1alpha1
API group, which is
exposed through ControllerRuntime.
package v2
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
cfg "sigs.k8s.io/controller-runtime/pkg/config/v1alpha1"
)
// +kubebuilder:object:root=true
Next, we’ll remove the default ProjectConfigSpec and ProjectConfigList, then we’ll embed cfg.ControllerManagerConfigurationSpec in ProjectConfig.
// ProjectConfig is the Schema for the projectconfigs API
type ProjectConfig struct {
metav1.TypeMeta `json:",inline"`
// ControllerManagerConfigurationSpec returns the configurations for controllers
cfg.ControllerManagerConfigurationSpec `json:",inline"`
ClusterName string `json:"clusterName,omitempty"`
}
If you haven’t already, you’ll also need to remove the ProjectConfigList from the SchemeBuilder.Register.
func init() {
SchemeBuilder.Register(&ProjectConfig{})
}
Lastly, we’ll change the main.go
to reference this type for parsing the file.
The ComponentConfig has been deprecated in the Controller-Runtime since its version 0.15.0. More info
Moreover, it has undergone breaking changes and is no longer functioning as intended.
As a result, Kubebuilder, which heavily relies on the Controller Runtime, has also deprecated this feature,
no longer guaranteeing its functionality from version 3.11.0 onwards. You can find additional details on this issue here .
Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.
Once you have defined your new custom component config type, we need to make sure the new config type has been imported and the types are registered with the scheme. If you used kubebuilder create api, this should have been automated.
import (
// ... other imports
configv2 "tutorial.kubebuilder.io/project/apis/config/v2"
// +kubebuilder:scaffold:imports
)
With the package imported we can confirm the types have been added.
func init() {
// ... other scheme registrations
utilruntime.Must(configv2.AddToScheme(scheme))
// +kubebuilder:scaffold:scheme
}
Lastly, we need to change the options parsing in
main.go
to use this new type. To do this we’ll chain OfKind
onto
ctrl.ConfigFile()
and pass in a pointer to the config kind.
var err error
ctrlConfig := configv2.ProjectConfig{}
options := ctrl.Options{Scheme: scheme}
if configFile != "" {
options, err = options.AndFrom(ctrl.ConfigFile().AtPath(configFile).OfKind(&ctrlConfig))
if err != nil {
setupLog.Error(err, "unable to load the config file")
os.Exit(1)
}
}
Now, if you need to use the .clusterName field we defined in our custom kind, you can call ctrlConfig.ClusterName, which will be populated from the supplied config file.
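For example, a minimal sketch of reading the custom field once the config has been loaded (the log line is illustrative and not part of the scaffold; configFile, options, err and ctrlConfig are declared as shown earlier):
if configFile != "" {
	options, err = options.AndFrom(ctrl.ConfigFile().AtPath(configFile).OfKind(&ctrlConfig))
	if err != nil {
		setupLog.Error(err, "unable to load the config file")
		os.Exit(1)
	}
}
// ctrlConfig now carries both the controller-runtime settings and our custom fields.
setupLog.Info("loaded project config", "clusterName", ctrlConfig.ClusterName)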
The ComponentConfig has been deprecated in the Controller-Runtime since its version 0.15.0. More info
Moreover, it has undergone breaking changes and is no longer functioning as intended.
As a result, Kubebuilder, which heavily relies on the Controller Runtime, has also deprecated this feature,
no longer guaranteeing its functionality from version 3.11.0 onwards. You can find additional details on this issue here .
Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.
Now that you have a custom component config, we change config/manager/controller_manager_config.yaml to use the new GVK you defined.
project/config/manager/controller_manager_config.yaml
apiVersion: config.tutorial.kubebuilder.io/v2
kind: ProjectConfig
metadata:
labels:
app.kubernetes.io/name: controllermanagerconfig
app.kubernetes.io/instance: controller-manager-configuration
app.kubernetes.io/component: manager
app.kubernetes.io/created-by: project
app.kubernetes.io/part-of: project
app.kubernetes.io/managed-by: kustomize
health:
healthProbeBindAddress: :8081
metrics:
bindAddress: 127.0.0.1:8080
webhook:
port: 9443
leaderElection:
leaderElect: true
resourceName: 80807133.tutorial.kubebuilder.io
clusterName: example-test
This type uses the new ProjectConfig kind under the GVK config.tutorial.kubebuilder.io/v2. With these custom configs, we can add any yaml-serializable fields that your controller needs and begin to reduce the reliance on flags to configure your project.
Migrating between project structures in Kubebuilder generally involves
a bit of manual work.
This section details what’s required to migrate between different versions of Kubebuilder scaffolding, as well as to more complex project layout structures.
Follow the migration guides from the legacy Kubebuilder versions up to the latest required v3.x version.
Note that from v3, a new plugin-based ecosystem is introduced for better maintainability, reusability and user experience.
For more info, see the design docs of:
Also, you can check the Plugins section.
This document covers all breaking changes when migrating from v1 to v2.
The details of all changes (breaking or otherwise) can be found in
controller-runtime ,
controller-tools
and kubebuilder
release notes.
V2 projects use Go modules, but kubebuilder will continue to support dep until Go 1.13 is out.
Client.List now uses functional options (List(ctx, list, ...option)) instead of List(ctx, ListOptions, list).
Client.DeleteAllOf was added to the Client interface (see the sketch at the end of this list).
Metrics are on by default now.
A number of packages under pkg/runtime
have been moved, with their old
locations deprecated. The old locations will be removed before
controller-runtime v1.0.0. See the godocs for more
information.
Automatic certificate generation for webhooks has been removed, and webhooks
will no longer self-register. Use controller-tools to generate a webhook
configuration. If you need certificate generation, we recommend using
cert-manager . Kubebuilder v2 will
scaffold out cert manager configs for you to use – see the
Webhook Tutorial for more details.
The builder
package now has separate builders for controllers and webhooks,
which facilitates choosing which to run.
The generator framework has been rewritten in v2. It still works the same as
before in many cases, but be aware that there are some breaking changes.
Please check marker documentation for more details.
Kubebuilder v2 introduces a simplified project layout. You can find the design
doc here .
In v1, the manager is deployed as a StatefulSet
, while it’s deployed as a
Deployment
in v2.
The kubebuilder create webhook
command was added to scaffold
mutating/validating/conversion webhooks. It replaces the
kubebuilder alpha webhook
command.
v2 uses distroless/static
instead of Ubuntu as base image. This reduces
image size and attack surface.
v2 requires kustomize v3.1.0+.
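As a quick reference for the client changes listed above, here is a minimal sketch (using the core Pod type purely for illustration) of the v2-style List call with functional options and the new DeleteAllOf:
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// listAndClean lists Pods with functional options, then deletes the same set.
func listAndClean(ctx context.Context, c client.Client) error {
	var pods corev1.PodList
	if err := c.List(ctx, &pods,
		client.InNamespace("default"), client.MatchingLabels{"app": "sample"}); err != nil {
		return err
	}
	return c.DeleteAllOf(ctx, &corev1.Pod{},
		client.InNamespace("default"), client.MatchingLabels{"app": "sample"})
}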
Make sure you understand the differences between Kubebuilder v1 and v2
before continuing
Please ensure you have followed the installation guide
to install the required components.
The recommended way to migrate a v1 project is to create a new v2 project and
copy over the API and the reconciliation code. The conversion will end up with a
project that looks like a native v2 project. However, in some cases, it’s possible to do an in-place upgrade (i.e. reuse the v1 project layout, upgrading controller-runtime and controller-tools).
Let’s take a v1 project as an example and migrate it to Kubebuilder v2. At the end, we should have something that looks like the example v2 project.
We’ll need to figure out what the group, version, kind and domain are.
Let’s take a look at our current v1 project structure:
pkg/
├── apis
│ ├── addtoscheme_batch_v1.go
│ ├── apis.go
│ └── batch
│ ├── group.go
│ └── v1
│ ├── cronjob_types.go
│ ├── cronjob_types_test.go
│ ├── doc.go
│ ├── register.go
│ ├── v1_suite_test.go
│ └── zz_generated.deepcopy.go
├── controller
└── webhook
All of our API information is stored in pkg/apis/batch
, so we can look
there to find what we need to know.
In cronjob_types.go
, we can find
type CronJob struct {...}
In register.go
, we can find
SchemeGroupVersion = schema.GroupVersion{Group: "batch.tutorial.kubebuilder.io", Version: "v1"}
Putting that together, we get CronJob
as the kind, and batch.tutorial.kubebuilder.io/v1
as the group-version
Now, we need to initialize a v2 project. Before we do that, though, we’ll need
to initialize a new go module if we’re not on the gopath
:
go mod init tutorial.kubebuilder.io/project
Then, we can finish initializing the project with kubebuilder:
kubebuilder init --domain tutorial.kubebuilder.io
Next, we’ll re-scaffold out the API types and controllers. Since we want both,
we’ll say yes to both the API and controller prompts when asked what parts we
want to scaffold:
kubebuilder create api --group batch --version v1 --kind CronJob
If you’re using multiple groups, some manual work is required to migrate.
Please follow this for more details.
Now, let’s copy the API definition from pkg/apis/batch/v1/cronjob_types.go
to
api/v1/cronjob_types.go
. We only need to copy the implementation of the Spec
and Status
fields.
We can replace the +k8s:deepcopy-gen:interfaces=...
marker (which is
deprecated in kubebuilder ) with
+kubebuilder:object:root=true
.
We don’t need the following markers any more (they’re not used anymore, and are
relics from much older versions of Kubebuilder):
// +genclient
// +k8s:openapi-gen=true
Our API types should look like the following:
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// CronJob is the Schema for the cronjobs API
type CronJob struct {...}
// +kubebuilder:object:root=true
// CronJobList contains a list of CronJob
type CronJobList struct {...}
Now, let’s migrate the controller reconciler code from pkg/controller/cronjob/cronjob_controller.go to controllers/cronjob_controller.go.
We’ll need to copy:
the fields from the ReconcileCronJob struct to CronJobReconciler
the contents of the Reconcile function
the rbac related markers to the new file
the code under func add(mgr manager.Manager, r reconcile.Reconciler) error to func SetupWithManager (see the sketch below)
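As a rough sketch of where that code ends up (assuming a controller that only watches CronJob itself; any Owns or Watches calls from the old add function would be chained on the builder as well, and the import path assumes the tutorial’s module layout):
import (
	ctrl "sigs.k8s.io/controller-runtime"

	batchv1 "tutorial.kubebuilder.io/project/api/v1"
)

// SetupWithManager replaces the old add() helper: it wires the reconciler
// into the manager and declares which types trigger reconciliation.
func (r *CronJobReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&batchv1.CronJob{}).
		Complete(r)
}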
If you don’t have a webhook, you can skip this section.
If you are using webhooks for Kubernetes core types (e.g. Pods), or for an
external CRD that is not owned by you, you can refer to the
controller-runtime example for builtin types
and do something similar. Kubebuilder doesn’t scaffold much for these cases, but
you can use the library in controller-runtime.
Now let’s scaffold the webhooks for our CRD (CronJob). We’ll need to run the
following command with the --defaulting
and --programmatic-validation
flags
(since our test project uses defaulting and validating webhooks):
kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting --programmatic-validation
Depending on how many CRDs need webhooks, we may need to run the above command
multiple times with different Group-Version-Kinds.
Now, we’ll need to copy the logic for each webhook. For validating webhooks, we
can copy the contents from
func validatingCronJobFn
in pkg/default_server/cronjob/validating/cronjob_create_handler.go
to func ValidateCreate
in api/v1/cronjob_webhook.go
and then the same for update
.
Similarly, we’ll copy from func mutatingCronJobFn
to func Default
.
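A rough sketch of where that logic lands in the v2 scaffold’s api/v1/cronjob_webhook.go; the method bodies are placeholders, and the signatures follow the webhook.Defaulter and webhook.Validator interfaces used by the v2-era scaffold:
import (
	"k8s.io/apimachinery/pkg/runtime"
)

// Default receives the logic copied from mutatingCronJobFn.
func (r *CronJob) Default() {
	// defaulting logic goes here
}

// ValidateCreate receives the logic copied from validatingCronJobFn.
func (r *CronJob) ValidateCreate() error {
	// creation-time validation goes here
	return nil
}

// ValidateUpdate does the same for updates.
func (r *CronJob) ValidateUpdate(old runtime.Object) error {
	// update-time validation goes here
	return nil
}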
When scaffolding webhooks, Kubebuilder v2 adds the following markers:
// These are v2 markers
// This is for the mutating webhook
// +kubebuilder:webhook:path=/mutate-batch-tutorial-kubebuilder-io-v1-cronjob,mutating=true,failurePolicy=fail,groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=create;update,versions=v1,name=mcronjob.kb.io
...
// This is for the validating webhook
// +kubebuilder:webhook:path=/validate-batch-tutorial-kubebuilder-io-v1-cronjob,mutating=false,failurePolicy=fail,groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=create;update,versions=v1,name=vcronjob.kb.io
The default verbs are verbs=create;update
. We need to ensure verbs
matches
what we need. For example, if we only want to validate creation, then we would
change it to verbs=create
.
We also need to ensure failure-policy
is still the same.
Markers like the following are no longer needed (since they deal with
self-deploying certificate configuration, which was removed in v2):
// v1 markers
// +kubebuilder:webhook:port=9876,cert-dir=/tmp/cert
// +kubebuilder:webhook:service=test-system:webhook-service,selector=app:webhook-server
// +kubebuilder:webhook:secret=test-system:webhook-server-secret
// +kubebuilder:webhook:mutating-webhook-config-name=test-mutating-webhook-cfg
// +kubebuilder:webhook:validating-webhook-config-name=test-validating-webhook-cfg
In v1, a single webhook marker may be split into multiple ones in the same
paragraph. In v2, each webhook must be represented by a single marker.
If there are any manual updates in main.go
in v1, we need to port the changes
to the new main.go
. We’ll also need to ensure all of the needed schemes have
been registered.
If there are additional manifests added under config
directory, port them as
well.
Change the image name in the Makefile if needed.
Finally, we can run make
and make docker-build
to ensure things are working
fine.
This document covers all breaking changes when migrating from v2 to v3.
The details of all changes (breaking or otherwise) can be found in
controller-runtime ,
controller-tools
and kb-releases release notes.
v3 projects use Go modules and require Go 1.18+. Dep is no longer supported for dependency management.
Preliminary support for plugins was added. For more info see the Extensible CLI and Scaffolding Plugins: phase 1 ,
the Extensible CLI and Scaffolding Plugins: phase 1.5 and the Extensible CLI and Scaffolding Plugins - Phase 2
design docs. Also, you can check the Plugins section .
The PROJECT
file now has a new layout. It stores more information about what resources are in use, to better enable plugins to make useful decisions when scaffolding.
Furthermore, the PROJECT file itself is now versioned: the version
field corresponds to the version of the PROJECT file itself, while the layout
field indicates the scaffolding & primary plugin version in use.
The version of the image gcr.io/kubebuilder/kube-rbac-proxy, which is an optional component enabled by default to secure requests made against the manager, was updated from 0.5.0 to 0.11.0 to address security concerns. The details of all changes can be found in kube-rbac-proxy.
More details on this can be found here, but for the highlights, check below.
Projects scaffolded with Kubebuilder v3 will use the `go.kubebuilder.io/v3` plugin by default.
After using the CLI to create your project, you are free to customize it as you see fit. Bear in mind that it is not recommended to deviate from the proposed layout unless you know what you are doing.
For example, you should refrain from moving the scaffolded files; doing so will make it difficult to upgrade your project in the future. You may also lose the ability to use some of the CLI features and helpers. For further information on the project layout, see the doc What’s in a basic project?
If you want to upgrade your scaffolding to use the latest and greatest features, follow the guide below, which covers the steps in the most straightforward way to allow you to upgrade your project and get all the latest changes and improvements.
The current scaffold done by the CLI (go/v3) uses kubernetes-sigs/kustomize v3, which does not provide a valid binary for Apple Silicon (darwin/arm64). Therefore, you can use the go/v4 plugin instead, which provides support for this platform:
kubebuilder init --domain my.domain --repo my.domain/guestbook --plugins=go/v4
If you want to use the latest version of the Kubebuilder CLI without changing your scaffolding, check the following guide, which describes the manual steps required to upgrade only your PROJECT version and start using the plugin versions.
This way is more complex, susceptible to errors, and success cannot be assured. Also, by following these steps you will not get the improvements and bug fixes in the default generated project files.
You can still use the previous layout via the go/v2 plugin, which will not upgrade controller-runtime and controller-tools to the latest versions used with go/v3 because of their breaking changes. This guide also shows how to manually change the files to use the go/v3 plugin and its dependency versions.
Make sure you understand the differences between Kubebuilder v2 and v3
before continuing.
Please ensure you have followed the installation guide
to install the required components.
The recommended way to migrate a v2 project is to create a new v3 project and
copy over the API and the reconciliation code. The conversion will end up with a
project that looks like a native v3 project. However, in some cases, it’s
possible to do an in-place upgrade (i.e. reuse the v2 project layout, upgrading
controller-runtime and controller-tools ).
For the rest of this document, we are going to use migration-project
as the project name and tutorial.kubebuilder.io
as the domain. Please, select and use appropriate values for your case.
Create a new directory with the name of your project. Note that
this name is used in the scaffolds to create the name of your manager Pod and of the Namespace where the Manager is deployed by default.
$ mkdir migration-project-name
$ cd migration-project-name
Now, we need to initialize a v3 project. Before we do that, though, we’ll need
to initialize a new go module if we’re not on the GOPATH
. While technically this is
not needed inside GOPATH
, it is still recommended.
go mod init tutorial.kubebuilder.io/migration-project
Then, we can finish initializing the project with kubebuilder.
kubebuilder init --domain tutorial.kubebuilder.io
Next, we’ll re-scaffold out the API types and controllers.
For this example, we are going to consider that we need to scaffold both the API types and the controllers, but remember that this depends on how you scaffolded them in your original project.
kubebuilder create api --group batch --version v1 --kind CronJob
From now on, the CRDs created by controller-gen will use the Kubernetes API version apiextensions.k8s.io/v1 by default, instead of apiextensions.k8s.io/v1beta1.
The apiextensions.k8s.io/v1beta1 API was deprecated in Kubernetes 1.16 and removed in Kubernetes 1.22.
So, if you would like to keep using the previous version, use the flag --crd-version=v1beta1 in the above command; this is only needed if you want your operator to support Kubernetes 1.15 and earlier. However, it is no longer recommended.
Please run kubebuilder edit --multigroup=true
to enable multi-group support before migrating the APIs and controllers. Please see this for more details.
Now, let’s copy the API definition from api/v1/<kind>_types.go in our old project to the new one.
These files have not been modified by the new plugin, so you should be able to replace your freshly scaffolded files with your old ones. There may be some cosmetic changes, so you can choose to only copy the types themselves.
Now, let’s migrate the controller code from controllers/cronjob_controller.go
in our old project to the new one. There is a breaking change and there may be some cosmetic changes.
The new Reconcile
method receives the context as an argument now, instead of having to create it with context.Background()
. You can copy the rest of the code in your old controller to the scaffolded methods replacing:
func (r *CronJobReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
ctx := context.Background()
log := r.Log.WithValues("cronjob", req.NamespacedName)
With:
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := r.Log.WithValues("cronjob", req.NamespacedName)
If you don’t have any webhooks, you can skip this section.
Now let’s scaffold the webhooks for our CRD (CronJob). We’ll need to run the
following command with the --defaulting
and --programmatic-validation
flags
(since our test project uses defaulting and validating webhooks):
kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting --programmatic-validation
From now on, the webhooks created by Kubebuilder will use the Kubernetes API version admissionregistration.k8s.io/v1 by default instead of admissionregistration.k8s.io/v1beta1, and cert-manager.io/v1 to replace cert-manager.io/v1alpha2.
Note that apiextensions/v1beta1 and admissionregistration.k8s.io/v1beta1 were deprecated in Kubernetes 1.16 and will be removed in Kubernetes 1.22. If you use apiextensions/v1 and admissionregistration.k8s.io/v1, then you need to use cert-manager.io/v1, which is the API the Kubebuilder CLI adopts by default in this case.
The API cert-manager.io/v1alpha2 is not compatible with the latest Kubernetes API versions.
So, if you would like to keep using the previous version, use the flag --webhook-version=v1beta1 in the above command; this is only needed if you want your operator to support Kubernetes 1.15 and earlier.
Now, let’s copy the webhook definition from api/v1/<kind>_webhook.go
from our old project to the new one.
If there are any manual updates in main.go
in v2, we need to port the changes to the new main.go
. We’ll also need to ensure all of the needed schemes have been registered.
If there are additional manifests added under config directory, port them as well.
Change the image name in the Makefile if needed.
Finally, we can run make
and make docker-build
to ensure things are working
fine.
Make sure you understand the differences between Kubebuilder v2 and v3
before continuing
Please ensure you have followed the installation guide
to install the required components.
The following guide describes the manual steps required to upgrade your config version and start using the plugin-enabled version.
This way is more complex, susceptible to errors, and success cannot be assured. Also, by following these steps you will not get the improvements and bug fixes in the default generated project files.
Usually you will only try to do it manually if you customized your project and deviated too much from the proposed scaffold. Before continuing, ensure that you understand the note about project customizations. Note that you might need to spend more effort doing this process manually than organizing your project customizations to follow the proposed layout, which keeps your project maintainable and upgradable with less effort in the future.
The recommended upgrade approach is to follow the Migration Guide v2 to V3 instead.
Migrating between project configuration versions involves additions, removals, and/or changes
to fields in your project’s PROJECT
file, which is created by running the init
command.
The PROJECT
file now has a new layout. It stores more information about what resources are in use, to better enable plugins to make useful decisions when scaffolding.
Furthermore, the PROJECT
file itself is now versioned. The version
field corresponds to the version of the PROJECT
file itself, while the layout
field indicates the scaffolding and the primary plugin version in use.
The following steps describe the manual changes required to bring the project configuration file (PROJECT) up to date. These changes will add the information that Kubebuilder would add when generating the file. This file can be found in the root directory.
The project name is the name of the project directory in lowercase:
...
projectName: example
...
The default plugin layout which is equivalent to the previous version is go.kubebuilder.io/v2
:
...
layout:
- go.kubebuilder.io/v2
...
The version field represents the version of the project’s layout. Update this to "3":
...
version: "3"
...
The attribute resources
represents the list of resources scaffolded in your project.
You will need to add the following data for each resource added to the project.
...
resources:
- api:
...
crdVersion: v1beta1
domain: my.domain
group: webapp
kind: Guestbook
...
...
resources:
- api:
...
namespaced: true
group: webapp
kind: Guestbook
...
...
resources:
- api:
...
controller: true
group: webapp
kind: Guestbook
...
resources:
- api:
...
domain: testproject.org
group: webapp
kind: Guestbook
By default, Kubebuilder only supports core types and the APIs scaffolded in the project; unless you manually change the files, you will be unable to work with external types.
For core types, the domain value will be k8s.io or empty.
However, for an external type you might leave this attribute empty. We cannot suggest what the best approach would be in this case until it becomes officially supported by the tool. For further information check the issue #1999.
Note that you only need to add the domain if your project has a scaffold for a core-type API whose Domain value is not empty in the Kubernetes API group qualified scheme definition. (For example, Kinds from the apps API have no domain, while Kinds from the authentication API have the domain k8s.io.)
Check the following list for the supported core types and their domains:
Core Type Domain
admission “k8s.io”
admissionregistration “k8s.io”
apps empty
auditregistration “k8s.io”
apiextensions “k8s.io”
authentication “k8s.io”
authorization “k8s.io”
autoscaling empty
batch empty
certificates “k8s.io”
coordination “k8s.io”
core empty
events “k8s.io”
extensions empty
imagepolicy “k8s.io”
networking “k8s.io”
node “k8s.io”
metrics “k8s.io”
policy empty
rbac.authorization “k8s.io”
scheduling “k8s.io”
setting “k8s.io”
storage “k8s.io”
The following is an example where a controller was scaffolded for the core type Kind Deployment via the command create api --group apps --version v1 --kind Deployment --controller=true --resource=false --make=false:
- controller: true
group: apps
kind: Deployment
path: k8s.io/api/apps/v1
version: v1
If you did not scaffold an API but only generated a controller for the informed API (GVK), you do not need to add the path. Note that this usually happens when you add a controller for an external or core type.
By default, Kubebuilder only supports core types and the APIs scaffolded in the project; unless you manually change the files, you will be unable to work with external types.
The path will always be the import path used in your Go files to use the API.
...
resources:
- api:
...
...
group: webapp
kind: Guestbook
path: example/api/v1
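For illustration only, the path attribute above corresponds to the import path you would write in Go (the alias below is hypothetical):
package main

import (
	webappv1 "example/api/v1" // matches path: example/api/v1 in the PROJECT file
)

// A variable of the scaffolded kind, just to show the import in use.
var _ = webappv1.Guestbook{}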
The valid types are: defaulting, validation and conversion. Use the webhook type that was used to scaffold the project.
The Kubernetes API version used for the webhook scaffolds in Kubebuilder v2 is v1beta1, so you will add webhookVersion: v1beta1 in all cases.
resources:
- api:
...
...
group: webapp
kind: Guestbook
webhooks:
defaulting: true
validation: true
webhookVersion: v1beta1
Now ensure that your PROJECT file has the same information that the Kubebuilder v3 CLI would generate.
For the QuickStart example, the PROJECT file manually updated to use go.kubebuilder.io/v2 would look like:
domain: my.domain
layout:
- go.kubebuilder.io/v2
projectName: example
repo: example
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: my.domain
group: webapp
kind: Guestbook
path: example/api/v1
version: v1
version: "3"
You can check the differences between the previous layout (version 2) and the current format (version 3) with go.kubebuilder.io/v2 by comparing an example scenario which involves more than one API and webhook; see:
Example (Project version 2)
domain: testproject.org
repo: sigs.k8s.io/kubebuilder/example
resources:
- group: crew
kind: Captain
version: v1
- group: crew
kind: FirstMate
version: v1
- group: crew
kind: Admiral
version: v1
version: "2"
Example (Project version 3)
domain: testproject.org
layout:
- go.kubebuilder.io/v2
projectName: example
repo: sigs.k8s.io/kubebuilder/example
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: testproject.org
group: crew
kind: Captain
path: example/api/v1
version: v1
webhooks:
defaulting: true
validation: true
webhookVersion: v1
- api:
crdVersion: v1
namespaced: true
controller: true
domain: testproject.org
group: crew
kind: FirstMate
path: example/api/v1
version: v1
webhooks:
conversion: true
webhookVersion: v1
- api:
crdVersion: v1
controller: true
domain: testproject.org
group: crew
kind: Admiral
path: example/api/v1
plural: admirales
version: v1
webhooks:
defaulting: true
webhookVersion: v1
version: "3"
In the steps above, you updated only the PROJECT
file which represents the project configuration. This configuration is useful only for the CLI tool. It should not affect how your project behaves.
There is no option to verify that you properly updated the configuration file. The best way to ensure the configuration file has the correct V3+
fields is to initialize a project with the same API(s), controller(s), and webhook(s) in order to compare generated configuration with the manually changed configuration.
If you made mistakes in the above process, you will likely face issues using the CLI.
Migrating between project plugins involves additions, removals, and/or changes
to files created by any plugin-supported command, e.g. init
and create
. A plugin supports
one or more project config versions; make sure you upgrade your project’s
config version to the latest supported by your target plugin version before upgrading plugin versions.
The following steps describe the manual changes required to modify the project’s layout, enabling your project to use the go/v3 plugin. These steps will not help you address all the bug fixes of the already generated scaffolds.
The following steps will not migrate the deprecated API versions apiextensions.k8s.io/v1beta1, admissionregistration.k8s.io/v1beta1, and cert-manager.io/v1alpha2.
Before updating the layout
, please ensure you have followed the above steps to upgrade your Project version to 3
. Once you have upgraded the project version, update the layout
to the new plugin version go.kubebuilder.io/v3
as follows:
domain: my.domain
layout:
- go.kubebuilder.io/v3
...
Ensure that your go.mod is using Go version 1.18 and the following dependency versions:
module example
go 1.18
require (
github.com/onsi/ginkgo/v2 v2.1.4
github.com/onsi/gomega v1.19.0
k8s.io/api v0.24.0
k8s.io/apimachinery v0.24.0
k8s.io/client-go v0.24.0
sigs.k8s.io/controller-runtime v0.12.1
)
In the Dockerfile, replace:
# Build the manager binary
FROM golang:1.13 as builder
With:
# Build the manager binary
FROM golang:1.16 as builder
To allow controller-gen
and the scaffolding tool to use the new API versions, replace:
CRD_OPTIONS ?= "crd:trivialVersions=true"
With:
CRD_OPTIONS ?= "crd"
To allow downloading the newer versions of the Kubernetes binaries required by Envtest into the testbin/
directory of your project instead of the global setup, replace:
# Run tests
test: generate fmt vet manifests
go test ./... -coverprofile cover.out
With:
# Setting SHELL to bash allows bash commands to be executed by recipes.
# Options are set to exit when a recipe line exits non-zero or a piped command fails.
SHELL = /usr/bin/env bash -o pipefail
.SHELLFLAGS = -ec
ENVTEST_ASSETS_DIR=$(shell pwd)/testbin
test: manifests generate fmt vet ## Run tests.
mkdir -p ${ENVTEST_ASSETS_DIR}
test -f ${ENVTEST_ASSETS_DIR}/setup-envtest.sh || curl -sSLo ${ENVTEST_ASSETS_DIR}/setup-envtest.sh https://raw.githubusercontent.com/kubernetes-sigs/controller-runtime/v0.8.3/hack/setup-envtest.sh
source ${ENVTEST_ASSETS_DIR}/setup-envtest.sh; fetch_envtest_tools $(ENVTEST_ASSETS_DIR); setup_envtest_env $(ENVTEST_ASSETS_DIR); go test ./... -coverprofile cover.out
The Kubernetes binaries that are required for the Envtest were upgraded from 1.16.4
to 1.22.1
.
You can still install them globally by following these installation instructions .
To upgrade the controller-gen
and kustomize
version used to generate the manifests replace:
# find or download controller-gen
# download controller-gen if necessary
controller-gen:
ifeq (, $(shell which controller-gen))
@{ \
set -e ;\
CONTROLLER_GEN_TMP_DIR=$$(mktemp -d) ;\
cd $$CONTROLLER_GEN_TMP_DIR ;\
go mod init tmp ;\
go get sigs.k8s.io/controller-tools/cmd/controller-gen@v0.2.5 ;\
rm -rf $$CONTROLLER_GEN_TMP_DIR ;\
}
CONTROLLER_GEN=$(GOBIN)/controller-gen
else
CONTROLLER_GEN=$(shell which controller-gen)
endif
With:
##@ Build Dependencies
## Location to install dependencies to
LOCALBIN ?= $(shell pwd)/bin
$(LOCALBIN):
mkdir -p $(LOCALBIN)
## Tool Binaries
KUSTOMIZE ?= $(LOCALBIN)/kustomize
CONTROLLER_GEN ?= $(LOCALBIN)/controller-gen
ENVTEST ?= $(LOCALBIN)/setup-envtest
## Tool Versions
KUSTOMIZE_VERSION ?= v3.8.7
CONTROLLER_TOOLS_VERSION ?= v0.9.0
KUSTOMIZE_INSTALL_SCRIPT ?= "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh"
.PHONY: kustomize
kustomize: $(KUSTOMIZE) ## Download kustomize locally if necessary.
$(KUSTOMIZE): $(LOCALBIN)
test -s $(LOCALBIN)/kustomize || { curl -Ss $(KUSTOMIZE_INSTALL_SCRIPT) | bash -s -- $(subst v,,$(KUSTOMIZE_VERSION)) $(LOCALBIN); }
.PHONY: controller-gen
controller-gen: $(CONTROLLER_GEN) ## Download controller-gen locally if necessary.
$(CONTROLLER_GEN): $(LOCALBIN)
test -s $(LOCALBIN)/controller-gen || GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-tools/cmd/controller-gen@$(CONTROLLER_TOOLS_VERSION)
.PHONY: envtest
envtest: $(ENVTEST) ## Download envtest-setup locally if necessary.
$(ENVTEST): $(LOCALBIN)
test -s $(LOCALBIN)/setup-envtest || GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
And then, to make your project use the kustomize version defined in the Makefile, replace all usages of kustomize with $(KUSTOMIZE).
You can check all changes applied to the Makefile by looking at the sample projects generated in the testdata directory of the Kubebuilder repository, or just by creating a new project with the Kubebuilder CLI.
Replace:
func (r *<MyKind>Reconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
ctx := context.Background()
log := r.Log.WithValues("cronjob", req.NamespacedName)
With:
func (r *<MyKind>Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := r.Log.WithValues("cronjob", req.NamespacedName)
Replace:
. "github.com/onsi/ginkgo"
With:
. "github.com/onsi/ginkgo/v2"
Also, adjust your test suite.
For Controller Suite:
RunSpecsWithDefaultAndCustomReporters(t,
"Controller Suite",
[]Reporter{printer.NewlineReporter{}})
With:
RunSpecs(t, "Controller Suite")
For Webhook Suite:
RunSpecsWithDefaultAndCustomReporters(t,
"Webhook Suite",
[]Reporter{printer.NewlineReporter{}})
With:
RunSpecs(t, "Webhook Suite")
Last but not least, remove the timeout variable from the BeforeSuite
blocks:
Replace:
var _ = BeforeSuite(func(done Done) {
....
}, 60)
With:
var _ = BeforeSuite(func(done Done) {
....
})
In the main.go
file replace:
flag.Parse()
ctrl.SetLogger(zap.New(zap.UseDevMode(true)))
With:
opts := zap.Options{
Development: true,
}
opts.BindFlags(flag.CommandLine)
flag.Parse()
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
The manager flags --metrics-addr and --enable-leader-election were renamed to --metrics-bind-address and --leader-elect to be more aligned with core Kubernetes components. More info: #1839.
In your main.go
file replace:
func main() {
var metricsAddr string
var enableLeaderElection bool
flag.StringVar(&metricsAddr, "metrics-addr", ":8080", "The address the metric endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.")
With:
func main() {
var metricsAddr string
var enableLeaderElection bool
flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "leader-elect", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.")
And then, rename the flags in the config/default/manager_auth_proxy_patch.yaml
and config/default/manager.yaml
:
- name: manager
args:
- "--health-probe-bind-address=:8081"
- "--metrics-bind-address=127.0.0.1:8080"
- "--leader-elect"
Finally, we can run make
and make docker-build
to ensure things are working
fine.
The following steps describe a workflow to upgrade your project to remove the deprecated Kubernetes APIs: apiextensions.k8s.io/v1beta1, admissionregistration.k8s.io/v1beta1, cert-manager.io/v1alpha2.
The Kubebuilder CLI tool does not support scaffolded resources that span both Kubernetes API versions, such as an API/CRD with apiextensions.k8s.io/v1beta1 and another one with apiextensions.k8s.io/v1.
If you scaffold a webhook using the Kubernetes API admissionregistration.k8s.io/v1 then, by default, it will use the API cert-manager.io/v1 in the manifests.
The first step is to update your PROJECT
file by replacing the api.crdVersion:v1beta
and webhooks.WebhookVersion:v1beta
with api.crdVersion:v1
and webhooks.WebhookVersion:v1
which would look like:
domain: my.domain
layout: go.kubebuilder.io/v3
projectName: example
repo: example
resources:
- api:
crdVersion: v1
namespaced: true
group: webapp
kind: Guestbook
version: v1
webhooks:
defaulting: true
webhookVersion: v1
version: "3"
You can try to re-create the APIs (CRDs) and webhook manifests by using the --force flag.
Note, however, that the tool will re-scaffold the files, which means that you will lose their content.
Before executing the commands, ensure that you have the files' content stored somewhere else. An easy option is to use git to compare your local changes with the previous version to recover the contents.
Now, re-create the APIs (CRDs) and webhook manifests by running kubebuilder create api and kubebuilder create webhook for the same group, kind and versions with the flag --force, respectively.
Follow the migration guides for the plugin versions. Note that the plugin ecosystem was introduced with the Kubebuilder v3.0.0 release, and go/v3 has been the default layout since 28 Apr 2021.
Therefore, you can check here how to migrate projects built with Kubebuilder 3.x and the go/v3 plugin to the latest version.
This document covers all breaking changes when migrating from projects built using the plugin go/v3 (default for any scaffold done since 28 Apr 2021
) to the next alpha version of the Golang plugin go/v4
.
The details of all changes (breaking or otherwise) can be found in:
go/v4 projects use Kustomize v5.x (instead of v3.x); note that some manifests under the config/ directory have been changed in order to no longer use deprecated Kustomize features such as env vars.
A kustomization.yaml is scaffolded under config/samples. This helps simply and flexibly generate sample manifests: kustomize build config/samples.
Adds support for Apple Silicon M1 (darwin/arm64).
Removes support for the CRD/Webhook Kubernetes API v1beta1 versions, which are no longer supported since k8s 1.22.
No longer scaffolds webhook test files with "k8s.io/api/admission/v1beta1", the k8s API which is no longer served since k8s 1.25. By default, webhook test files are scaffolded using "k8s.io/api/admission/v1", which is supported since k8s 1.20.
No longer provides backwards-compatible support for k8s versions < 1.16.
Changes the layout to accommodate the community request to follow the Standard Go Project Layout, by moving the api(s) under a new directory called api, the controller(s) under a new directory called internal, and the main.go under a new directory named cmd.
More details on this can be found here, but for the highlights, check below.
After using the CLI to create your project, you are free to customize it as you see fit. Bear in mind that it is not recommended to deviate from the proposed layout unless you know what you are doing.
For example, you should refrain from moving the scaffolded files; doing so will make it difficult to upgrade your project in the future. You may also lose the ability to use some of the CLI features and helpers. For further information on the project layout, see the doc What’s in a basic project?
If you want to upgrade your scaffolding to use the latest and greatest features, follow the guide below, which covers the steps in the most straightforward way to allow you to upgrade your project and get all the latest changes and improvements.
If you want to use the latest version of the Kubebuilder CLI without changing your scaffolding, check the following guide, which describes the steps to be performed manually to upgrade only your PROJECT version and start using the plugin versions.
This way is more complex, susceptible to errors, and success cannot be assured. Also, by following these steps you will not get the improvements and bug fixes in the default generated project files.
Make sure you understand the differences between Kubebuilder go/v3 and go/v4
before continuing.
Please ensure you have followed the installation guide
to install the required components.
The recommended way to migrate a go/v3
project is to create a new go/v4
project and
copy over the API and the reconciliation code. The conversion will end up with a
project that looks like a native go/v4 project layout (latest version).
To upgrade your project you might want to use the command kubebuilder alpha generate [OPTIONS]
.
This command will re-scaffold the project using the current Kubebuilder version.
You can run kubebuilder alpha generate --plugins=go/v4 to regenerate your project using go/v4 based on your PROJECT file config.
However, in some cases, it’s possible to do an in-place upgrade (i.e. reuse the go/v3 project layout, upgrading
the PROJECT file, and scaffolds manually). For further information see Migration from go/v3 to go/v4 by updating the files manually
For the rest of this document, we are going to use migration-project
as the project name and tutorial.kubebuilder.io
as the domain. Please, select and use appropriate values for your case.
Create a new directory with the name of your project. Note that
this name is used in the scaffolds to create the name of your manager Pod and of the Namespace where the Manager is deployed by default.
$ mkdir migration-project-name
$ cd migration-project-name
Now, we need to initialize a go/v4 project. Before we do that, we’ll need
to initialize a new go module if we’re not on the GOPATH
. While technically this is
not needed inside GOPATH
, it is still recommended.
go mod init tutorial.kubebuilder.io/migration-project
Now, we can finish initializing the project with kubebuilder.
kubebuilder init --domain tutorial.kubebuilder.io --plugins=go/v4
Next, we’ll re-scaffold out the API types and controllers.
For this example, we are going to consider that we need to scaffold both the API types and the controllers, but remember that this depends on how you scaffolded them in your original project.
kubebuilder create api --group batch --version v1 --kind CronJob
Please run kubebuilder edit --multigroup=true
to enable multi-group support before migrating the APIs and controllers. Please see this for more details.
Now, let’s copy the API definition from api/v1/<kind>_types.go
in our old project to the new one.
These files have not been modified by the new plugin, so you should be able to replace your freshly scaffolded files with your old ones. There may be some cosmetic changes, so you can choose to only copy the types themselves.
Now, let’s migrate the controller code from controllers/cronjob_controller.go
in our old project to the new one.
If you don’t have any webhooks, you can skip this section.
Now let’s scaffold the webhooks for our CRD (CronJob). We’ll need to run the
following command with the --defaulting
and --programmatic-validation
flags
(since our test project uses defaulting and validating webhooks):
kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting --programmatic-validation
Now, let’s copy the webhook definition from api/v1/<kind>_webhook.go
from our old project to the new one.
If there are any manual updates in main.go in v3, we need to port the changes to the new main.go. We’ll also need to ensure that all of the needed controller-runtime schemes have been registered.
If there are additional manifests added under the config directory, port them as well. Please be aware that go/v4 uses Kustomize v5 and no longer Kustomize v4. Therefore, if you added customized implementations under config, you need to ensure that they work with Kustomize v5 and, if not, update them to address any breaking changes you might face.
In v4, installation of Kustomize has been changed from a bash script to go install. Change the kustomize dependency in the Makefile to:
.PHONY: kustomize
kustomize: $(KUSTOMIZE) ## Download kustomize locally if necessary. If wrong version is installed, it will be removed before downloading.
$(KUSTOMIZE): $(LOCALBIN)
	@if test -x $(LOCALBIN)/kustomize && ! $(LOCALBIN)/kustomize version | grep -q $(KUSTOMIZE_VERSION); then \
		echo "$(LOCALBIN)/kustomize version is not expected $(KUSTOMIZE_VERSION). Removing it before installing."; \
		rm -rf $(LOCALBIN)/kustomize; \
	fi
	test -s $(LOCALBIN)/kustomize || GOBIN=$(LOCALBIN) GO111MODULE=on go install sigs.k8s.io/kustomize/kustomize/v5@$(KUSTOMIZE_VERSION)
Change the image name in the Makefile if needed.
Finally, we can run make
and make docker-build
to ensure things are working
fine.
Make sure you understand the differences between Kubebuilder go/v3 and go/v4
before continuing.
Please ensure you have followed the installation guide
to install the required components.
The following guide describes the manual steps required to upgrade your PROJECT config file to begin using go/v4.
This way is more complex, susceptible to errors, and success cannot be assured. Also, by following these steps you will not get the improvements and bug fixes in the default generated project files.
Usually it is suggested to do it manually if you have customized your project and deviated too much from the proposed scaffold. Before continuing, ensure that you understand the note about project customizations. Note that you might need to spend more effort doing this process manually than reorganizing your project customizations. The proposed layout will keep your project maintainable and upgradable with less effort in the future.
The recommended upgrade approach is to follow the Migration Guide go/v3 to go/v4 instead.
Update the PROJECT file layout, which stores information about the resources that is used to enable plugins to make useful decisions while scaffolding. The layout field indicates the scaffolding and the primary plugin version in use.
The following steps describe the manual changes required to bring the project configuration file (PROJECT) up to date. These changes will add the information that Kubebuilder would add when generating the file. This file can be found in the root directory.
Update the PROJECT file by replacing:
layout:
- go.kubebuilder.io/v3
With:
layout:
- go.kubebuilder.io/v4
The directory apis was renamed to api to follow the standard.
The controller(s) directory has been moved under a new directory called internal and renamed to the singular controller.
The main.go previously scaffolded in the root directory has been moved under a new directory called cmd.
Therefore, the changes result in the following layout:
...
├── cmd
│ └── main.go
├── internal
│ └── controller
└── api
Create a new directory cmd and move main.go under it.
If your project supports multi-group, the APIs are scaffolded under a directory called apis. Rename this directory to api.
Move the controllers directory under internal and rename it to controller (see the sketch of these moves below).
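For reference, a minimal sketch of these moves as shell commands, assuming the default single-group go/v3 layout (adjust the paths if your project deviates from the standard scaffold):

```bash
mkdir cmd
mv main.go cmd/main.go
# Only if your project is multi-group:
# mv apis api
mkdir -p internal
mv controllers internal/controller
```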
Now ensure that the imports are updated accordingly:
Update the main.go imports to point to the new path of your controllers under the internal/controller directory.
Then, let’s update the scaffold paths.
Update the Dockerfile to ensure that you will have:
COPY cmd/main.go cmd/main.go
COPY api/ api/
COPY internal/controller/ internal/controller/
Then, replace:
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager main.go
With:
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager cmd/main.go
Update the Makefile targets to build and run the manager by replacing:
.PHONY: build
build: manifests generate fmt vet ## Build manager binary.
go build -o bin/manager main.go
.PHONY: run
run: manifests generate fmt vet ## Run a controller from your host.
go run ./main.go
With:
.PHONY: build
build: manifests generate fmt vet ## Build manager binary.
go build -o bin/manager cmd/main.go
.PHONY: run
run: manifests generate fmt vet ## Run a controller from your host.
go run ./cmd/main.go
Update the internal/controller/suite_test.go
to set the path for the CRDDirectoryPaths
:
Replace:
CRDDirectoryPaths: []string{filepath.Join("..", "config", "crd", "bases")},
With:
CRDDirectoryPaths: []string{filepath.Join("..", "..", "config", "crd", "bases")},
Note that if your project has multiple groups (multigroup: true), then the above update should use "..", "..", ".." instead of "..", "..".
The PROJECT tracks the paths of all APIs used in your project. Ensure that they now point to api/...
as the following example:
Before update:
group: crew
kind: Captain
path: sigs.k8s.io/kubebuilder/testdata/project-v4/apis/crew/v1
After Update:
group: crew
kind: Captain
path: sigs.k8s.io/kubebuilder/testdata/project-v4/api/crew/v1
Update the manifests under the config/ directory with all changes performed in the default scaffold done with the go/v4 plugin (see for example testdata/project-v4/config/), so that all changes in the default scaffolds are applied to your project.
Create config/samples/kustomization.yaml listing all Custom Resource samples under config/samples (see for example testdata/project-v4/config/samples/kustomization.yaml).
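For illustration, for a project with a single CronJob sample the file could look like the following (the sample file name is hypothetical; list whatever sample manifests you have under config/samples):

```yaml
# config/samples/kustomization.yaml
resources:
- batch_v1_cronjob.yaml
```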
Note that under the config/ directory you will find scaffolding changes, since with go/v4 you are no longer using Kustomize v3.
You can compare the config/ directory of the samples scaffolded under the testdata directory by checking the differences between testdata/project-v3/config/ and testdata/project-v4/config/, which are samples created with the same commands, the only difference being the plugin versions.
However, note that if you created your project with Kubebuilder CLI 3.0.0, its scaffolds might have changed to accommodate changes up to the latest releases using go/v3, which are not considered breaking for users and/or are forced by changes introduced in the dependencies used by the project, such as controller-runtime and controller-tools.
Replace the import admissionv1beta1 "k8s.io/api/admission/v1beta1"
with admissionv1 "k8s.io/api/admission/v1"
in the webhook test files
Update the Makefile with the changes which can be found in the samples under testdata for the release tag used. (see for example testdata/project-v4/Makefile
)
Update the go.mod
with the changes which can be found in the samples under testdata
for the release tag used. (see for example testdata/project-v4/go.mod
). Then, run
go mod tidy
to ensure that you get the latest dependencies and your Golang code has no breaking changes.
In the steps above, you updated your project manually with the goal of ensuring that it follows the changes in the layout introduced with the go/v4 plugin that updates the scaffolds.
There is no option to verify that you properly updated the PROJECT file of your project.
The best way to ensure that everything is updated correctly is to initialize a project using the go/v4 plugin, i.e. using kubebuilder init --domain tutorial.kubebuilder.io --plugins=go/v4, and generating the same API(s), controller(s), and webhook(s) in order to compare the generated configuration with the manually changed configuration.
Also, after all updates you would run the following commands:
make manifests
(to re-generate the files using the latest version of controller-gen after you update the Makefile)
make all
(to ensure that you are able to build and perform all operations)
While Kubebuilder will not scaffold out a project structure compatible
with multiple API groups in the same repository by default, it’s possible
to modify the default project structure to support it.
Note that the process mainly ensures that your API(s) and controller(s) are moved under new directories named after their respective groups.
Let’s migrate the CronJob example.
You can verify the version by looking at the PROJECT file. The current default and recommended version is go/v4.
The go/v3 layout is deprecated; if you are using go/v3, it is recommended that you migrate to go/v4 (see Migration from go/v3 to go/v4), though this documentation is still valid.
To change the layout of your project to support Multi-Group run the command
kubebuilder edit --multigroup=true
. Once you switch to a multi-group layout, the new Kinds
will be generated in the new layout but additional manual work is needed
to move the old API groups to the new layout.
Generally, we use the prefix for the API group as the directory name. We
can check api/v1/groupversion_info.go
to find that out:
// +groupName=batch.tutorial.kubebuilder.io
package v1
Then, we’ll move our existing APIs into a new subdirectory, “batch”:
mkdir api/batch
mv api/* api/batch
After moving the APIs to a new directory, the same needs to be applied to the controllers. For go/v4:
mkdir internal/controller/batch
mv internal/controller/* internal/controller/batch/
If your layout does not have the internal directory (go/v3 layout), move the controller(s) under a directory named after the API group that each is responsible for managing:
```bash
mkdir controller/batch
mv controller/* controller/batch/
```
Next, we’ll need to update all the references to the old package name.
For CronJob, the files to update would be main.go and controllers/batch/cronjob_controller.go, pointing the imports to their respective locations in the new project structure.
If you’ve added additional files to your project, you’ll need to track down
imports there as well.
Finally, fix the PROJECT file manually. The command kubebuilder edit --multigroup=true sets our project to multigroup, but it doesn’t fix the path of the existing APIs.
For each resource, we need to modify the path.
For instance, for a file:
# Code generated by tool. DO NOT EDIT.
# This file is used to track the info used to scaffold your project
# and allow the plugins properly work.
# More info: https://book.kubebuilder.io/reference/project-config.html
domain: tutorial.kubebuilder.io
layout:
- go.kubebuilder.io/v4
multigroup: true
projectName: test
repo: tutorial.kubebuilder.io/project
resources:
- api:
    crdVersion: v1
    namespaced: true
  controller: true
  domain: tutorial.kubebuilder.io
  group: batch
  kind: CronJob
  path: tutorial.kubebuilder.io/project/api/v1beta1
  version: v1beta1
version: "3"
Replace path: tutorial.kubebuilder.io/project/api/v1beta1 with path: tutorial.kubebuilder.io/project/api/batch/v1beta1
In this process, if the project is not new and has previously implemented APIs, those would still need to be modified as needed.
Notice that with a multi-group project, the Kind API files are created under api/<group>/<version> instead of api/<version>.
Also, note that the controllers will be created under internal/controller/<group>
instead of internal/controller
.
That is the reason why we moved the previously generated APIs to their respective locations in the new structure.
Remember to update the references in imports accordingly.
For envtest to install CRDs correctly into the test environment, the relative path to the CRD directory needs to be updated accordingly in each internal/controller/<group>/suite_test.go
file. We need to add additional ".."
to our CRD directory relative path as shown below.
By("bootstrapping test environment")
testEnv = &envtest.Environment{
CRDDirectoryPaths: []string{filepath.Join("..", "..", "config", "crd", "bases")},
}
The CronJob tutorial explains each of these changes in
more detail (in the context of how they’re generated by Kubebuilder for
single-group projects).
Please note that all input utilized via the Kubebuilder tool is tracked in the PROJECT file (example ).
This file is responsible for storing essential information, representing various facets of the Project such as its layout,
plugins, APIs, and more. (More info ).
With the release of new plugin versions/layouts or even a new Kubebuilder CLI version with scaffold changes,
an easy way to upgrade your project is by re-scaffolding. This process allows users to employ tools like IDEs to compare
changes, enabling them to overlay their code implementation on the new scaffold or integrate these changes into their existing projects.
This command is useful when you want to upgrade an existing project to the latest version of the Kubebuilder project layout.
It makes it easier for the users to migrate their operator projects to the new scaffolding.
To upgrade the scaffold of your project to use a new plugin version:
kubebuilder alpha generate --plugins="pluginkey/version"
To upgrade the scaffold of your project to get the latest changes, run the command without the --plugins flag.
Currently, it supports two optional params, input-dir and output-dir.
input-dir is the path to the existing project that you want to re-scaffold. Default is the current working directory.
output-dir is the path to the directory where you want to generate the new project. Default is a subdirectory in the current working directory.
kubebuilder alpha generate --input-dir=/path/to/existing/project --output-dir=/path/to/new/project
If neither input-dir nor output-dir are specified, the project will be regenerated in the current directory.
This approach facilitates comparison between your current local branch and the version stored upstream (e.g., GitHub main branch).
This way, you can easily overlay your project’s code changes atop the new scaffold.
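For example, one possible workflow (the paths are illustrative) is to re-scaffold into a scratch directory and compare it with your current project before porting your code on top of the new scaffold:

```bash
kubebuilder alpha generate --input-dir=. --output-dir=/tmp/my-project-rescaffold
# Review the differences, then overlay your custom code on the new scaffold
diff -ru /tmp/my-project-rescaffold .
```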
Kubebuilder uses a tool called controller-gen
to
generate utility code and Kubernetes object YAML, like
CustomResourceDefinitions.
To do this, it makes use of special “marker comments” (comments that start
with // +
) to indicate additional information about fields, types, and
packages. In the case of CRDs, these are generally pulled from your
_types.go
files. For more information on markers, see the marker
reference docs .
Kubebuilder provides a make
target to run controller-gen and generate
CRDs: make manifests
.
When you run make manifests
, you should see CRDs generated under the
config/crd/bases
directory. make manifests
can generate a number of
other artifacts as well – see the marker reference docs for
more details.
CRDs support declarative validation using an OpenAPI
v3 schema in the validation
section.
In general, validation markers may be
attached to fields or to types. If you’re defining complex validation, if
you need to re-use validation, or if you need to validate slice elements,
it’s often best to define a new type to describe your validation.
For example:
type ToySpec struct {
// +kubebuilder:validation:MaxLength=15
// +kubebuilder:validation:MinLength=1
Name string `json:"name,omitempty"`
// +kubebuilder:validation:MaxItems=500
// +kubebuilder:validation:MinItems=1
// +kubebuilder:validation:UniqueItems=true
Knights []string `json:"knights,omitempty"`
Alias Alias `json:"alias,omitempty"`
Rank Rank `json:"rank"`
}
// +kubebuilder:validation:Enum=Lion;Wolf;Dragon
type Alias string
// +kubebuilder:validation:Minimum=1
// +kubebuilder:validation:Maximum=3
// +kubebuilder:validation:ExclusiveMaximum=false
type Rank int32
Starting with Kubernetes 1.11, kubectl get
can ask the server what
columns to display. For CRDs, this can be used to provide useful,
type-specific information with kubectl get
, similar to the information
provided for built-in types.
The information that gets displayed can be controlled with the
additionalPrinterColumns field on your
CRD, which is controlled by the
+kubebuilder:printcolumn
marker on the Go type for
your CRD.
For instance, in the following example, we add fields to display
information about the knights, rank, and alias fields from the validation
example:
// +kubebuilder:printcolumn:name="Alias",type=string,JSONPath=`.spec.alias`
// +kubebuilder:printcolumn:name="Rank",type=integer,JSONPath=`.spec.rank`
// +kubebuilder:printcolumn:name="Bravely Run Away",type=boolean,JSONPath=`.spec.knights[?(@ == "Sir Robin")]`,description="when danger rears its ugly head, he bravely turned his tail and fled",priority=10
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
type Toy struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ToySpec `json:"spec,omitempty"`
Status ToyStatus `json:"status,omitempty"`
}
CRDs can choose to implement the /status
and /scale
subresources as of Kubernetes 1.13.
It’s generally recommended that you make use of the /status
subresource
on all resources that have a status field.
Both subresources have a corresponding marker .
The status subresource is enabled via +kubebuilder:subresource:status
.
When enabled, updates to the main resource will not change its status. Similarly, updates to the status subresource cannot change anything but the status field.
For example:
// +kubebuilder:subresource:status
type Toy struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ToySpec `json:"spec,omitempty"`
Status ToyStatus `json:"status,omitempty"`
}
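With the subresource enabled, the controller writes status through the status client rather than through a regular update. A minimal sketch, assuming a ToyReconciler that embeds a controller-runtime client.Client (as scaffolded reconcilers do) and a hypothetical Happy field on ToyStatus:

```go
// updateToyStatus persists a status change through the /status subresource.
// Only the status field can change through this client; spec edits are ignored.
func (r *ToyReconciler) updateToyStatus(ctx context.Context, toy *Toy) error {
	toy.Status.Happy = true // hypothetical status field
	return r.Status().Update(ctx, toy)
}
```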
The scale subresource is enabled via +kubebuilder:subresource:scale
.
When enabled, users will be able to use kubectl scale
with your
resource. If the selectorpath argument points to the string form of a label selector, the HorizontalPodAutoscaler will be able to autoscale your resource.
For example:
type CustomSetSpec struct {
Replicas *int32 `json:"replicas"`
}
type CustomSetStatus struct {
Replicas int32 `json:"replicas"`
Selector string `json:"selector"` // this must be the string form of the selector
}
// +kubebuilder:subresource:status
// +kubebuilder:subresource:scale:specpath=.spec.replicas,statuspath=.status.replicas,selectorpath=.status.selector
type CustomSet struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec CustomSetSpec `json:"spec,omitempty"`
Status CustomSetStatus `json:"status,omitempty"`
}
As of Kubernetes 1.13, you can have multiple versions of your Kind defined
in your CRD, and use a webhook to convert between them.
For more details on this process, see the multiversion
tutorial .
By default, Kubebuilder disables generating different validation for
different versions of the Kind in your CRD, to be compatible with older
Kubernetes versions.
You’ll need to enable this by switching the line in your Makefile that says CRD_OPTIONS ?= "crd:trivialVersions=true,preserveUnknownFields=false" to CRD_OPTIONS ?= crd:preserveUnknownFields=false if using v1beta CRDs, and CRD_OPTIONS ?= crd if using v1 (recommended).
Then, you can use the +kubebuilder:storageversion
marker
to indicate the GVK that
should be used to store data by the API server.
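For example, with two served versions of a Kind, the marker goes on the Go type of the version that should be persisted in etcd. A sketch (the spec and status types are the ones from your own API package; conversion itself is covered in the multiversion tutorial):

```go
// +kubebuilder:object:root=true
// +kubebuilder:storageversion

// CronJob is the Schema for the cronjobs API; this version is the storage version.
type CronJob struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   CronJobSpec   `json:"spec,omitempty"`
	Status CronJobStatus `json:"status,omitempty"`
}
```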
By default, kubebuilder create api
will create CRDs of API version v1
,
a version introduced in Kubernetes v1.16. If your project intends to support
Kubernetes cluster versions older than v1.16, you must use the v1beta1
API version:
kubebuilder create api --crd-version v1beta1 ...
To support Kubernetes clusters of version v1.14 or lower, you’ll also need to
remove the controller-gen option preserveUnknownFields=false
from your Makefile.
This is done by switching the line that says CRD_OPTIONS ?= "crd:trivialVersions=true,preserveUnknownFields=false" to CRD_OPTIONS ?= crd:trivialVersions=true.
v1beta1
is deprecated and was removed in Kubernetes v1.22, so upgrading is essential.
Kubebuilder scaffolds out make rules to run controller-gen
. The rules
will automatically install controller-gen if it’s not on your path using
go install
with Go modules.
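If you prefer, you can also install controller-gen manually with go install; the version below is only an example, so pin it to whatever version your Makefile references:

```bash
go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.14.0
```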
You can also run controller-gen
directly, if you want to see what it’s
doing.
Each controller-gen “generator” is controlled by an option to
controller-gen, using the same syntax as markers. controller-gen
also supports different output “rules” to control how and where output goes.
Notice the manifests
make rule (condensed slightly to only generate CRDs):
# Generate manifests for CRDs
manifests: controller-gen
	$(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
It uses the output:crd:artifacts
output rule to indicate that
CRD-related config (non-code) artifacts should end up in
config/crd/bases
instead of config/crd
.
To see all the options including generators for controller-gen
, run
$ controller-gen -h
or, for more details:
$ controller-gen -hhh
Finalizers allow controllers to implement asynchronous pre-delete hooks. Let’s say you create an external resource (such as a storage bucket) for each object of your API type, and you want to delete the associated external resource when the object is deleted from Kubernetes; you can use a finalizer to do that.
You can read more about the finalizers in the Kubernetes reference docs . The section below demonstrates how to register and trigger pre-delete hooks
in the Reconcile
method of a controller.
The key point to note is that a finalizer causes “delete” on the object to become
an “update” to set deletion timestamp. Presence of deletion timestamp on the object
indicates that it is being deleted. Otherwise, without finalizers, a delete
shows up as a reconcile where the object is missing from the cache.
Highlights:
If the object is not being deleted and does not have the finalizer registered,
then add the finalizer and update the object in Kubernetes.
If the object is being deleted and the finalizer is still present in the finalizers list, then execute the pre-delete logic, remove the finalizer, and update the object.
Ensure that the pre-delete logic is idempotent.
../../cronjob-tutorial/testdata/finalizer_example.go
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
First, we start out with some standard imports.
As before, we need the core controller-runtime library, as well as
the client package, and the package for our API types.
package controllers
import (
"context"
"k8s.io/kubernetes/pkg/apis/batch"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
batchv1 "tutorial.kubebuilder.io/project/api/v1"
)
By default, kubebuilder will include the RBAC rules necessary to update finalizers for CronJobs.
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/finalizers,verbs=update
The code snippet below shows skeleton code for implementing a finalizer.
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := r.Log.WithValues("cronjob", req.NamespacedName)
cronJob := &batchv1.CronJob{}
if err := r.Get(ctx, req.NamespacedName, cronJob); err != nil {
log.Error(err, "unable to fetch CronJob")
// we'll ignore not-found errors, since they can't be fixed by an immediate
// requeue (we'll need to wait for a new notification), and we can get them
// on deleted requests.
return ctrl.Result{}, client.IgnoreNotFound(err)
}
// name of our custom finalizer
myFinalizerName := "batch.tutorial.kubebuilder.io/finalizer"
// examine DeletionTimestamp to determine if object is under deletion
if cronJob.ObjectMeta.DeletionTimestamp.IsZero() {
// The object is not being deleted, so if it does not have our finalizer,
// then lets add the finalizer and update the object. This is equivalent
// to registering our finalizer.
if !controllerutil.ContainsFinalizer(cronJob, myFinalizerName) {
controllerutil.AddFinalizer(cronJob, myFinalizerName)
if err := r.Update(ctx, cronJob); err != nil {
return ctrl.Result{}, err
}
}
} else {
// The object is being deleted
if controllerutil.ContainsFinalizer(cronJob, myFinalizerName) {
// our finalizer is present, so lets handle any external dependency
if err := r.deleteExternalResources(cronJob); err != nil {
// if fail to delete the external dependency here, return with error
// so that it can be retried.
return ctrl.Result{}, err
}
// remove our finalizer from the list and update it.
controllerutil.RemoveFinalizer(cronJob, myFinalizerName)
if err := r.Update(ctx, cronJob); err != nil {
return ctrl.Result{}, err
}
}
// Stop reconciliation as the item is being deleted
return ctrl.Result{}, nil
}
// Your reconcile logic
return ctrl.Result{}, nil
}
func (r *Reconciler) deleteExternalResources(cronJob *batch.CronJob) error {
//
// delete any external resources associated with the cronJob
//
// Ensure that delete implementation is idempotent and safe to invoke
// multiple times for same object.
return nil
}
When you create a project using Kubebuilder, have a look at the scaffolded code generated under cmd/main.go. This code initializes a Manager, and the project relies on the controller-runtime framework. The Manager manages Controllers, which offer a reconcile function that synchronizes resources until the desired state is achieved within the cluster.
. This code initializes a Manager , and the project relies on the controller-runtime framework. The Manager manages Controllers , which offer a reconcile function that synchronizes resources until the desired state is achieved within the cluster.
Reconciliation is an ongoing loop that executes necessary operations to maintain the desired state, adhering to Kubernetes principles, such as the control loop . For further information, check out the Operator patterns documentation from Kubernetes to better understand those concepts.
When developing operators, the controller’s reconciliation loop needs to be idempotent. By following the Operator pattern we create controllers that provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster. Developing idempotent solutions will allow the reconciler to correctly respond to generic or unexpected events, and to easily deal with application startup or upgrade. More explanation on this is available here.
Writing reconciliation logic according to specific events breaks the recommendation of the Operator pattern and goes against the design principles of controller-runtime. This may lead to unforeseen consequences, such as resources becoming stuck and requiring manual intervention.
Building your operator commonly involves extending the Kubernetes API itself. It is helpful to understand precisely how Custom Resource Definitions (CRDs) interact with the Kubernetes API. Also, the Kubebuilder documentation on Groups and Versions and Kinds may be helpful to understand these concepts better as they relate to operators.
Additionally, we recommend checking the documentation on Operator patterns from Kubernetes to better understand the purpose of the standard solutions built with KubeBuilder.
Embracing the Kubernetes API conventions and standards is crucial for maximizing the potential of your applications and deployments. By adhering to these established practices, you can benefit in several ways.
Firstly, adherence ensures seamless interoperability within the Kubernetes ecosystem. Following conventions allows your applications to work harmoniously with other components, reducing compatibility issues and promoting a consistent user experience.
Secondly, sticking to API standards enhances the maintainability and troubleshooting of your applications. Adopting familiar patterns and structures makes debugging and supporting your deployments easier, leading to more efficient operations and quicker issue resolution.
Furthermore, leveraging the Kubernetes API conventions empowers you to harness the platform’s full capabilities. By working within the defined framework, you can leverage the rich set of features and resources offered by Kubernetes, enabling scalability, performance optimization, and resilience.
Lastly, embracing these standards future-proofs your native solutions. By aligning with the evolving Kubernetes ecosystem, you ensure compatibility with future updates, new features, and enhancements introduced by the vibrant Kubernetes community.
In summary, by adhering to the Kubernetes API conventions and standards, you unlock the potential for seamless integration, simplified maintenance, optimal performance, and future-readiness, all contributing to the success of your applications and deployments.
Avoid a design solution where the same controller reconciles more than one Kind. Having many Kinds (such as CRDs), that are all managed by the same controller, usually goes against the design proposed by controller-runtime. Furthermore, this might hurt concepts such as encapsulation, the Single Responsibility Principle, and Cohesion. Damaging these concepts may cause unexpected side effects and increase the difficulty of extending, reusing, or maintaining the operator.
Having one controller manage many Custom Resources (CRs) in an Operator can lead to several issues:
Complexity : A single controller managing multiple CRs can increase the complexity of the code, making it harder to understand, maintain, and debug.
Scalability : Each controller typically manages a single kind of CR for scalability. If a single controller handles multiple CRs, it could become a bottleneck, reducing the overall efficiency and responsiveness of your system.
Single Responsibility Principle : Following this principle from software engineering, each controller should ideally have only one job. This approach simplifies development and debugging, and makes the system more robust.
Error Isolation : If one controller manages multiple CRs and an error occurs, it could potentially impact all the CRs it manages. Having a single controller per CR ensures that an issue with one controller or CR does not directly affect others.
Concurrency and Synchronization : A single controller managing multiple CRs could lead to race conditions and require complex synchronization, especially if the CRs have interdependencies.
In conclusion, while it might seem efficient to have a single controller manage multiple CRs, it often leads to higher complexity, lower scalability, and potential stability issues. It’s generally better to adhere to the single responsibility principle, where each CR is managed by its own controller.
Managing a single Custom Resource (CR) with multiple controllers can lead to several challenges:
Race conditions : When multiple controllers attempt to reconcile the same CR concurrently, race conditions can emerge. These conditions can produce inconsistent or unpredictable outcomes. For example, if we try to update the CR to add a status condition, we may encounter a range of errors such as “the object has been modified; please apply your changes to the latest version and try again”, triggering a repetitive reconciliation process.
Concurrency issues : When controllers have different interpretations of the CR’s state, they may constantly overwrite each other’s changes. This conflict can create a loop, with the controllers ceaselessly disputing the CR’s state.
Maintenance and support difficulties : Coordinating the logic for multiple controllers operating on the same CR can increase system complexity, making it more challenging to understand or troubleshoot. Typically, a system’s behavior is easier to comprehend when each CR is managed by a single controller.
Status tracking complications : We may struggle to work adequately with status conditions to accurately track the state of each component managed by the Installer.
Performance issues : If multiple controllers are watching and reconciling the Installer Kind, redundant operations may occur, leading to unnecessary resource usage.
These challenges underline the importance of assigning each controller the single responsibility of managing its own CR. This will streamline our processes and ensure a more reliable system.
We recommend you manage your solutions using Status Conditions, following the Kubernetes API conventions, because:
Standardization : Conditions provide a standardized way to represent the state of an Operator’s custom resources, making it easier for users and tools to understand and interpret the resource’s status.
Readability : Conditions can clearly express complex states by using a combination of multiple conditions, making it easier for users to understand the current state and progress of the resource.
Extensibility : As new features or states are added to your Operator, conditions can be easily extended to represent these new states without requiring significant changes to the existing API or structure.
Observability : Status conditions can be monitored and tracked by cluster administrators and external monitoring tools, enabling better visibility into the state of the custom resources managed by the Operator.
Compatibility : By adopting the common pattern of using conditions in Kubernetes APIs, Operator authors ensure their custom resources align with the broader ecosystem, which helps users to have a consistent experience when interacting with multiple Operators and resources in their clusters.
Check out the Deploy Image Plugin . This plugin allows users to scaffold API/Controllers to deploy and manage an Operand (image) on the cluster following the guidelines and best practices. It abstracts the
complexities of achieving this goal while allowing users to customize the generated code.
Therefore, you can check an example of Status Conditional usage by looking at its API(s) scaffolded and code implemented under the Reconciliation into its Controllers.
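As an illustrative sketch (not the exact code scaffolded by the plugin), a reconciler following this pattern might record a condition like this; MyKindReconciler, MyKind, and its Conditions []metav1.Condition status field are assumptions for the example:

```go
import (
	"context"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// markAvailable records an "Available" condition on the custom resource and
// persists it through the status subresource. The reconciler is assumed to
// embed a controller-runtime client.Client, as scaffolded reconcilers do.
func (r *MyKindReconciler) markAvailable(ctx context.Context, obj *MyKind) error {
	meta.SetStatusCondition(&obj.Status.Conditions, metav1.Condition{
		Type:    "Available",
		Status:  metav1.ConditionTrue,
		Reason:  "Reconciled",
		Message: "All resources for the custom resource were created successfully",
	})
	return r.Status().Update(ctx, obj)
}
```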
It is often useful to publish Event objects from the controller Reconcile function as they allow users or any automated processes to see what is going on with a particular object and respond to them.
Recent Events for an object can be viewed by running $ kubectl describe <resource kind> <resource name>. They can also be checked by running $ kubectl get events.
Be aware that it is not recommended to emit Events for all operations. If authors raise too many events, it results in a poor user experience for those consuming the solutions on the cluster, and they may find it difficult to filter actionable events from the clutter. For more information, please take a look at the Kubernetes APIs convention.
Anatomy of an Event:
Event(object runtime.Object, eventtype, reason, message string)
object is the object this event is about.
eventtype is the type of this event, and is either Normal or Warning. (More info)
reason is the reason this event is generated. It should be short and unique, in UpperCamelCase format. The value could appear in switch statements by automation. (More info)
message is intended to be consumed by humans. (More info)
Following is an example of a code implementation that raises an Event.
// The following implementation will raise an event
r.Recorder.Event(cr, "Warning", "Deleting",
fmt.Sprintf("Custom Resource %s is being deleted from the namespace %s",
cr.Name,
cr.Namespace))
Following are the steps with examples to help you raise events in your controller’s reconciliations.
To raise an event, you must have access to record.EventRecorder
in the Controller. Therefore, firstly let’s update the controller implementation:
import (
...
"k8s.io/client-go/tools/record"
...
)
// MyKindReconciler reconciles a MyKind object
type MyKindReconciler struct {
client.Client
Scheme *runtime.Scheme
// See that we added the following code to allow us to pass the record.EventRecorder
Recorder record.EventRecorder
}
Events are published from a Controller using an EventRecorder (record.EventRecorder), which can be created for a Controller by calling GetEventRecorderFor(name string) on the Manager. See that we will change the implementation scaffolded in cmd/main.go:
if err = (&controller.MyKindReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
// Note that we added the following line:
Recorder: mgr.GetEventRecorderFor("mykind-controller"),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "MyKind")
os.Exit(1)
}
You must also grant the RBAC rules permissions to allow your project to create Events. Therefore, ensure that you add the RBAC into your controller:
...
//+kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
...
func (r *MyKindReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
And then, run $ make manifests
to update the rules under config/rbac/role.yaml
.
Inside a Reconcile()
control loop, you are looking to do a collection of operations until it has the desired state on the cluster.
Therefore, it can be necessary to know when a resource that you care about is changed.
In the case that there is an action (create, update, edit, delete, etc.) on a watched resource, Reconcile()
should be called for the resources watching it.
Controller Runtime libraries provide many ways for resources to be managed and watched.
This ranges from the easy and obvious use cases, such as watching the resources which were created and managed by the controller, to more unique and advanced use cases.
See each subsection for explanations and examples of the different ways in which your controller can Watch the resources it cares about.
Watching Operator Managed Resources -
These resources are created and managed by the same operator as the resource watching them.
This section covers both if they are managed by the same controller or separate controllers.
Watching Externally Managed Resources -
These resources could be manually created, or managed by other operators/controllers or the Kubernetes control plane.
Kubebuilder and the Controller Runtime libraries allow for controllers
to implement the logic of their CRD through easy management of Kubernetes resources.
Managing dependency resources is fundamental to a controller, and it’s not possible to manage them without watching for changes to their state.
Deployments must know when the ReplicaSets that they manage are changed
ReplicaSets must know when their Pods are deleted, or change from healthy to unhealthy.
Through the Owns()
functionality, Controller Runtime provides an easy way to watch dependency resources for changes.
A resource can be defined as dependent on another resource through the ‘ownerReferences’ field.
As an example, we are going to create a SimpleDeployment
resource.
The SimpleDeployment
’s purpose is to manage a Deployment
that users can change certain aspects of, through the SimpleDeployment
Spec.
The SimpleDeployment controller’s purpose is to make sure that its owned Deployment (which has an ownerReference pointing to the SimpleDeployment resource) always uses the settings provided by the user.
owned-resource/api.go
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
package owned_resource
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
In this example the controller is doing basic management of a Deployment object.
The Spec here allows the user to customize the deployment created in various ways.
For example, the number of replicas it runs with.
// SimpleDeploymentSpec defines the desired state of SimpleDeployment
type SimpleDeploymentSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make" to regenerate code after modifying this file
// The number of replicas that the deployment should have
// +optional
Replicas *int32 `json:"replicas,omitempty"`
}
The rest of the API configuration is covered in the CronJob tutorial.
// SimpleDeploymentStatus defines the observed state of SimpleDeployment
type SimpleDeploymentStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
}
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// SimpleDeployment is the Schema for the simpledeployments API
type SimpleDeployment struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec SimpleDeploymentSpec `json:"spec,omitempty"`
Status SimpleDeploymentStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// SimpleDeploymentList contains a list of SimpleDeployment
type SimpleDeploymentList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []SimpleDeployment `json:"items"`
}
func init() {
SchemeBuilder.Register(&SimpleDeployment{}, &SimpleDeploymentList{})
}
owned-resource/controller.go
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Along with the standard imports, we need additional controller-runtime and apimachinery libraries.
The extra imports are necessary for managing the objects that are “Owned” by the controller.
package owned_resource
import (
"context"
"github.com/go-logr/logr"
kapps "k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
appsv1 "tutorial.kubebuilder.io/project/api/v1"
)
// SimpleDeploymentReconciler reconciles a SimpleDeployment object
type SimpleDeploymentReconciler struct {
client.Client
Log logr.Logger
Scheme *runtime.Scheme
}
In addition to the SimpleDeployment permissions, we will also need permissions to manage Deployments.
In order to fully manage the workflow of deployments, our app will need to be able to use all verbs on a deployment as well as “get” its status.
//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=simpledeployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=simpledeployments/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=simpledeployments/finalizers,verbs=update
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=apps,resources=deployments/status,verbs=get
Reconcile
will be in charge of reconciling the state of SimpleDeployments
.
In this basic example, SimpleDeployments
are used to create and manage simple Deployments
that can be configured through the SimpleDeployment
Spec.
// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
func (r *SimpleDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := r.Log.WithValues("simpleDeployment", req.NamespacedName)
var simpleDeployment appsv1.SimpleDeployment
if err := r.Get(ctx, req.NamespacedName, &simpleDeployment); err != nil {
log.Error(err, "unable to fetch SimpleDeployment")
// we'll ignore not-found errors, since they can't be fixed by an immediate
// requeue (we'll need to wait for a new notification), and we can get them
// on deleted requests.
return ctrl.Result{}, client.IgnoreNotFound(err)
}
Build the deployment that we want to see exist within the cluster
deployment := &kapps.Deployment{}
// Set the information you care about
deployment.Spec.Replicas = simpleDeployment.Spec.Replicas
Set the controller reference, specifying that this Deployment
is controlled by the SimpleDeployment
being reconciled.
This will allow for the SimpleDeployment
to be reconciled when changes to the Deployment
are noticed.
if err := controllerutil.SetControllerReference(&simpleDeployment, deployment, r.Scheme); err != nil {
return ctrl.Result{}, err
}
Manage your Deployment
.
Create it if it doesn’t exist.
Update it if it is configured incorrectly.
foundDeployment := &kapps.Deployment{}
err := r.Get(ctx, types.NamespacedName{Name: deployment.Name, Namespace: deployment.Namespace}, foundDeployment)
if err != nil && errors.IsNotFound(err) {
log.V(1).Info("Creating Deployment", "deployment", deployment.Name)
err = r.Create(ctx, deployment)
} else if err == nil {
if foundDeployment.Spec.Replicas != deployment.Spec.Replicas {
foundDeployment.Spec.Replicas = deployment.Spec.Replicas
log.V(1).Info("Updating Deployment", "deployment", deployment.Name)
err = r.Update(ctx, foundDeployment)
}
}
return ctrl.Result{}, err
}
Finally, we add this reconciler to the manager, so that it gets started
when the manager is started.
Since we create dependency Deployments
during the reconcile, we can specify that the controller Owns
Deployments
.
This will tell the manager that if a Deployment
, or its status, is updated, then the SimpleDeployment
in its ownerRef field should be reconciled.
// SetupWithManager sets up the controller with the Manager.
func (r *SimpleDeploymentReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&appsv1.SimpleDeployment{}).
Owns(&kapps.Deployment{}).
Complete(r)
}
By default, Kubebuilder and the Controller Runtime libraries allow for controllers
to easily watch the resources that they manage as well as dependent resources that are Owned
by the controller.
However, those are not always the only resources that need to be watched in the cluster.
There are many examples of Resource Specs that allow users to reference external resources.
Ingresses have references to Service objects
Pods have references to ConfigMaps, Secrets and Volumes
Deployments and Services have references to Pods
This same functionality can be added to CRDs and custom controllers.
This will allow for resources to be reconciled when another resource it references is changed.
As an example, we are going to create a ConfigDeployment
resource.
The ConfigDeployment
’s purpose is to manage a Deployment
whose pods are always using the latest version of a ConfigMap
.
While ConfigMaps are auto-updated within Pods, applications may not always be able to auto-refresh config from the file system.
Some applications require restarts to apply configuration updates.
The ConfigDeployment
CRD will hold a reference to a ConfigMap inside its Spec.
The ConfigDeployment
controller will be in charge of creating a deployment with Pods that use the ConfigMap.
These pods should be updated anytime that the referenced ConfigMap changes, therefore the ConfigDeployments will need to be reconciled on changes to the referenced ConfigMap.
external-indexed-field/api.go
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
package external_indexed_field
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
In our type’s Spec, we want to allow the user to pass in a reference to a configMap in the same namespace.
It’s also possible for this to be a namespaced reference, but in this example we will assume that the referenced object
lives in the same namespace.
This field does not need to be optional.
If the field is required, the indexing code in the controller will need to be modified.
// ConfigDeploymentSpec defines the desired state of ConfigDeployment
type ConfigDeploymentSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make" to regenerate code after modifying this file
// Name of an existing ConfigMap in the same namespace, to add to the deployment
// +optional
ConfigMap string `json:"configMap,omitempty"`
}
The rest of the API configuration is covered in the CronJob tutorial.
// ConfigDeploymentStatus defines the observed state of ConfigDeployment
type ConfigDeploymentStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
}
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// ConfigDeployment is the Schema for the configdeployments API
type ConfigDeployment struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ConfigDeploymentSpec `json:"spec,omitempty"`
Status ConfigDeploymentStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// ConfigDeploymentList contains a list of ConfigDeployment
type ConfigDeploymentList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []ConfigDeployment `json:"items"`
}
func init() {
SchemeBuilder.Register(&ConfigDeployment{}, &ConfigDeploymentList{})
}
external-indexed-field/controller.go
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Along with the standard imports, we need additional controller-runtime and apimachinery libraries.
All additional libraries, necessary for Watching, have the comment Required For Watching
appended.
package external_indexed_field
import (
"context"
"github.com/go-logr/logr"
kapps "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/fields" // Required for Watching
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types" // Required for Watching
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder" // Required for Watching
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/handler" // Required for Watching
"sigs.k8s.io/controller-runtime/pkg/predicate" // Required for Watching
"sigs.k8s.io/controller-runtime/pkg/reconcile" // Required for Watching
"sigs.k8s.io/controller-runtime/pkg/source" // Required for Watching
appsv1 "tutorial.kubebuilder.io/project/api/v1"
)
Determine the path of the field in the ConfigDeployment CRD that we wish to use as the “object reference”.
This will be used in both the indexing and watching.
const (
configMapField = ".spec.configMap"
)
// ConfigDeploymentReconciler reconciles a ConfigDeployment object
type ConfigDeploymentReconciler struct {
client.Client
Log logr.Logger
Scheme *runtime.Scheme
}
There are two additional resources that the controller needs to have access to, other than ConfigDeployments.
It needs to be able to fully manage Deployments, as well as check their status.
It also needs to be able to get, list and watch ConfigMaps.
All 3 of these are important, and you will see usages of each below.
//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=configdeployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=configdeployments/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=configdeployments/finalizers,verbs=update
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=apps,resources=deployments/status,verbs=get
//+kubebuilder:rbac:groups="",resources=configmaps,verbs=get;list;watch
Reconcile
will be in charge of reconciling the state of ConfigDeployments.
ConfigDeployments are used to manage Deployments whose pods are updated whenever the configMap that they use is updated.
For that reason we need to add an annotation to the PodTemplate within the Deployment we create.
This annotation will keep track of the latest version of the data within the referenced ConfigMap.
Therefore when the version of the configMap is changed, the PodTemplate in the Deployment will change.
This will cause a rolling upgrade of all Pods managed by the Deployment.
Skip down to the SetupWithManager
function to see how we ensure that Reconcile
is called when the referenced ConfigMaps
are updated.
// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
func (r *ConfigDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := r.Log.WithValues("configDeployment", req.NamespacedName)
var configDeployment appsv1.ConfigDeployment
if err := r.Get(ctx, req.NamespacedName, &configDeployment); err != nil {
log.Error(err, "unable to fetch ConfigDeployment")
// we'll ignore not-found errors, since they can't be fixed by an immediate
// requeue (we'll need to wait for a new notification), and we can get them
// on deleted requests.
return ctrl.Result{}, client.IgnoreNotFound(err)
}
// your logic here
var configMapVersion string
if configDeployment.Spec.ConfigMap != "" {
configMapName := configDeployment.Spec.ConfigMap
foundConfigMap := &corev1.ConfigMap{}
err := r.Get(ctx, types.NamespacedName{Name: configMapName, Namespace: configDeployment.Namespace}, foundConfigMap)
if err != nil {
// If a configMap name is provided, then it must exist
// You will likely want to create an Event for the user to understand why their reconcile is failing.
return ctrl.Result{}, err
}
// Hash the data in some way, or just use the version of the Object
configMapVersion = foundConfigMap.ResourceVersion
}
// Logic here to add the configMapVersion as an annotation on your Deployment Pods.
return ctrl.Result{}, nil
}
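As a sketch of that last step (the helper name and annotation key are illustrative, not part of the scaffold), the ConfigMap version can be stamped onto the Deployment's pod template so that a change to the referenced ConfigMap rolls the Pods:

```go
// setConfigMapVersionAnnotation writes the referenced ConfigMap's version into
// the Deployment's pod template; any change to the template triggers a rolling
// update of the Pods managed by the Deployment.
func setConfigMapVersionAnnotation(deployment *kapps.Deployment, configMapVersion string) {
	if deployment.Spec.Template.Annotations == nil {
		deployment.Spec.Template.Annotations = map[string]string{}
	}
	deployment.Spec.Template.Annotations["tutorial.kubebuilder.io/configmap-version"] = configMapVersion
}
```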
Finally, we add this reconciler to the manager, so that it gets started
when the manager is started.
Since we create dependency Deployments during the reconcile, we can specify that the controller Owns
Deployments.
However the ConfigMaps that we want to watch are not owned by the ConfigDeployment object.
Therefore we must specify a custom way of watching those objects.
This watch logic is complex, so we have split it into a separate method.
// SetupWithManager sets up the controller with the Manager.
func (r *ConfigDeploymentReconciler) SetupWithManager(mgr ctrl.Manager) error {
The configMap field must be indexed by the manager, so that we will be able to look up ConfigDeployments by a referenced ConfigMap name.
This will allow us to quickly answer the question:
If ConfigMap x is updated, which ConfigDeployments are affected?
if err := mgr.GetFieldIndexer().IndexField(context.Background(), &appsv1.ConfigDeployment{}, configMapField, func(rawObj client.Object) []string {
// Extract the ConfigMap name from the ConfigDeployment Spec, if one is provided
configDeployment := rawObj.(*appsv1.ConfigDeployment)
if configDeployment.Spec.ConfigMap == "" {
return nil
}
return []string{configDeployment.Spec.ConfigMap}
}); err != nil {
return err
}
As explained in the CronJob tutorial, the controller will first register the Type that it manages, as well as the types of subresources that it controls.
Since we also want to watch ConfigMaps that are not controlled or managed by the controller, we will need to use the Watches() functionality as well.
The Watches() function is a controller-runtime API that takes:
A Kind (i.e. ConfigMap)
A mapping function that converts a ConfigMap object to a list of reconcile requests for ConfigDeployments. We have separated this out into a separate function.
A list of options for watching the ConfigMaps. In our case, we only want the watch to be triggered when the ResourceVersion of the ConfigMap is changed.
return ctrl.NewControllerManagedBy(mgr).
For(&appsv1.ConfigDeployment{}).
Owns(&kapps.Deployment{}).
Watches(
&source.Kind{Type: &corev1.ConfigMap{}},
handler.EnqueueRequestsFromMapFunc(r.findObjectsForConfigMap),
builder.WithPredicates(predicate.ResourceVersionChangedPredicate{}),
).
Complete(r)
}
Because we have already created an index on the configMap reference field, this mapping function is quite straightforward.
We first need to list out all ConfigDeployments that use the ConfigMap given to the mapping function.
This is done by merely submitting a List request using our indexed field as the field selector.
When the list of ConfigDeployments that reference the ConfigMap is found, we just need to loop through the list and create a reconcile request for each one.
If an error occurs fetching the list, or no ConfigDeployments are found, then no reconcile requests will be returned.
func (r *ConfigDeploymentReconciler) findObjectsForConfigMap(ctx context.Context, configMap client.Object) []reconcile.Request {
attachedConfigDeployments := &appsv1.ConfigDeploymentList{}
listOps := &client.ListOptions{
FieldSelector: fields.OneTermEqualSelector(configMapField, configMap.GetName()),
Namespace: configMap.GetNamespace(),
}
err := r.List(ctx, attachedConfigDeployments, listOps)
if err != nil {
return []reconcile.Request{}
}
requests := make([]reconcile.Request, len(attachedConfigDeployments.Items))
for i, item := range attachedConfigDeployments.Items {
requests[i] = reconcile.Request{
NamespacedName: types.NamespacedName{
Name: item.GetName(),
Namespace: item.GetNamespace(),
},
}
}
return requests
}
This only covers the basics of using a kind cluster. You can find more details in the kind documentation.
You can follow this guide to install kind.
You can simply create a kind cluster with:
kind create cluster
To customize your cluster, you can provide additional configuration.
For example, the following is a sample kind
configuration.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
Using the configuration above, running the following command will give you a k8s v1.17.2 cluster with 1 control-plane node and 3 worker nodes.
kind create cluster --config hack/kind-config.yaml --image=kindest/node:v1.17.2
You can use the --image flag to specify the cluster version you want, e.g. --image=kindest/node:v1.17.2; the supported versions are listed here.
When developing with a local kind cluster, loading docker images into the cluster is a very useful feature. It lets you avoid pushing to a container registry.
kind load docker-image your-image-name:your-tag
See Load a local image into a kind cluster for more information.
kind delete cluster
Webhooks are requests for information sent in a blocking fashion. A web application implementing webhooks will send an HTTP request to other applications when a certain event happens.
In the Kubernetes world, there are 3 kinds of webhooks: admission webhook, authorization webhook and CRD conversion webhook.
In controller-runtime
libraries, we support admission webhooks and CRD conversion webhooks.
Kubernetes supports these dynamic admission webhooks as of version 1.9 (when the
feature entered beta).
Kubernetes supports the conversion webhooks as of version 1.15 (when the
feature entered beta).
By default, kubebuilder create webhook will create webhook configs of API version v1, a version introduced in Kubernetes v1.16. If your project intends to support Kubernetes cluster versions older than v1.16, you must use the v1beta1 API version:
kubebuilder create webhook --webhook-version v1beta1 ...
v1beta1 is deprecated and will be removed in a future Kubernetes release, so upgrading is recommended.
Admission webhooks are HTTP callbacks that receive admission requests, process
them and return admission responses.
Kubernetes provides the following types of admission webhooks:
Mutating Admission Webhook :
These can mutate the object while it's being created or updated, before it gets stored. They can be used to default fields in resource requests, e.g. fields in a Deployment that are not specified by the user, or to inject sidecar containers.
Validating Admission Webhook :
These can validate the object while it's being created or updated, before it gets stored. They allow more complex validation than pure schema-based validation, e.g. cross-field validation and pod image whitelisting.
The apiserver by default doesn’t authenticate itself to the webhooks. However,
if you want to authenticate the clients, you can configure the apiserver to use
basic auth, bearer token, or a cert to authenticate itself to the webhooks.
You can find detailed steps
here .
Execution Order
Validating webhooks run after all mutating webhooks , so you don’t need to worry about another webhook changing an
object after your validation has accepted it.
Modify status
You cannot modify or default the status of a resource using a mutating admission webhook .
Set initial status in your controller when you first see a new object.
Mutating Admission Webhooks are primarily designed to intercept and modify requests concerning the creation,
modification, or deletion of objects. Though they possess the capability to modify an object’s specification,
directly altering its status isn’t deemed a standard practice,
often leading to unintended results.
// MutatingWebhookConfiguration allows for modification of objects.
// However, direct modification of the status might result in unexpected behavior.
type MutatingWebhookConfiguration struct {
...
}
For those diving into custom controllers for custom resources, it’s imperative to grasp the concept of setting an
initial status. This initialization typically takes place within the controller itself. The moment the controller
identifies a new instance of its managed resource, primarily through a watch mechanism, it holds the authority
to assign an initial status to that resource.
// Custom controller's reconcile function might look something like this:
func (r *ReconcileMyResource) Reconcile(request reconcile.Request) (reconcile.Result, error) {
// ...
// Upon discovering a new instance, set the initial status
instance.Status = SomeInitialStatus
// ...
}
Delving into Kubernetes custom resources, a clear demarcation exists between the spec (depicting the desired state)
and the status (illustrating the observed state). Activating the /status subresource for a custom resource definition
(CRD) bifurcates the status
and spec
, each assigned to its respective API endpoint.
This separation ensures that changes introduced by users, such as modifying the spec, and system-driven updates,
like status alterations, remain distinct. Leveraging a mutating webhook to tweak the status during a spec-modifying
operation might not pan out as expected, courtesy of this isolation.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: myresources.mygroup.mydomain
spec:
...
subresources:
status: {} # Enables the /status subresource
While certain edge scenarios might allow a mutating webhook to seamlessly modify the status, treading this path isn’t a
universally acclaimed or recommended strategy. Entrusting the controller logic with status updates remains the
most advocated approach.
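As a rough illustration of that separation, a controller writes observed state through the status client rather than a regular update. The snippet below is a minimal sketch using controller-runtime; MyResource, its Phase field, the examplev1 import alias, and the reconciler type are placeholders, not part of any scaffolded project.
// Sketch: spec-driven changes go through Update, while observed state goes
// through the /status subresource via the status writer.
func (r *MyResourceReconciler) markReady(ctx context.Context, obj *examplev1.MyResource) error {
    obj.Status.Phase = "Ready" // system-observed state only; the spec is untouched
    return r.Status().Update(ctx, obj)
}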
It is very easy to build admission webhooks for CRDs, which has been covered in
the CronJob tutorial . Given that kubebuilder doesn’t support webhook scaffolding
for core types, you have to use the library from controller-runtime to handle it.
There is an example
in controller-runtime.
It is suggested to use kubebuilder to initialize a project, and then you can
follow the steps below to add admission webhooks for core types.
You need to have your handler implement the admission.Handler interface.
type podAnnotator struct {
Client client.Client
decoder *admission.Decoder
}
func (a *podAnnotator) Handle(ctx context.Context, req admission.Request) admission.Response {
pod := &corev1.Pod{}
err := a.decoder.Decode(req, pod)
if err != nil {
return admission.Errored(http.StatusBadRequest, err)
}
// mutate the fields in pod
marshaledPod, err := json.Marshal(pod)
if err != nil {
return admission.Errored(http.StatusInternalServerError, err)
}
return admission.PatchResponseFromRaw(req.Object.Raw, marshaledPod)
}
Note : in order to have controller-gen generate the webhook configuration for
you, you need to add markers. For example,
// +kubebuilder:webhook:path=/mutate-v1-pod,mutating=true,failurePolicy=fail,groups="",resources=pods,verbs=create;update,versions=v1,name=mpod.kb.io
Now you need to register your handler in the webhook server.
mgr.GetWebhookServer().Register("/mutate-v1-pod", &webhook.Admission{Handler: &podAnnotator{Client: mgr.GetClient()}})
You need to ensure that the path here matches the path in the marker.
If you need a client and/or decoder, just pass them in at struct construction time.
mgr.GetWebhookServer().Register("/mutate-v1-pod", &webhook.Admission{
Handler: &podAnnotator{
Client: mgr.GetClient(),
decoder: admission.NewDecoder(mgr.GetScheme()),
},
})
Deploying it is just like deploying a webhook server for a CRD. You need to:
provision the serving certificate
deploy the server
You can follow the tutorial.
Kubebuilder makes use of a tool called
controller-gen for
generating utility code and Kubernetes YAML. This code and config
generation is controlled by the presence of special “marker comments” in
Go code.
Markers are single-line comments that start with a plus, followed by
a marker name, optionally followed by some marker specific configuration:
// +kubebuilder:validation:Optional
// +kubebuilder:validation:MaxItems=2
// +kubebuilder:printcolumn:JSONPath=".status.replicas",name=Replicas,type=string
Controller-gen supports both (see the output of controller-gen crd -www). +kubebuilder:validation:Optional and +optional can be applied to fields, but +kubebuilder:validation:Optional can also be applied at the package level, so that it applies to every field in the package.
If you're using controller-gen only, then they're redundant; but if you're using other generators, or developers need to build their own clients for your API, you'll want to also include +optional.
The most reliable way in 1.x to get +optional is omitempty.
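For example, the two forms can be combined on a single field like this (an illustrative snippet, not scaffolded by kubebuilder):
type WidgetSpec struct {
    // Both markers mark the field optional in the generated CRD schema; the
    // plain +optional form is also understood by other Kubernetes generators,
    // and omitempty keeps the field out of the serialized object when unset.
    // +kubebuilder:validation:Optional
    // +optional
    Replicas *int32 `json:"replicas,omitempty"`
}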
See each subsection for information about different types of code and YAML
generation.
Kubebuilder projects have two make targets that make use of controller-gen: make manifests, which generates Kubernetes object YAML such as CustomResourceDefinitions, webhook configurations, and RBAC roles, and make generate, which generates code such as the DeepCopy method implementations.
See Generating CRDs for a comprehensive overview.
Exact syntax is described in the godocs for
controller-tools .
In general, markers may either be:
Empty (+kubebuilder:validation:Optional): empty markers are like boolean flags on the command line – just specifying them enables some behavior.
Anonymous (+kubebuilder:validation:MaxItems=2): anonymous markers take a single value as their argument.
Multi-option (+kubebuilder:printcolumn:JSONPath=".status.replicas",name=Replicas,type=string): multi-option markers take one or more named arguments. The first argument is separated from the name by a colon, and latter arguments are comma-separated. Order of arguments doesn't matter. Some arguments may be optional.
Marker arguments may be strings, ints, bools, slices, or maps thereof.
Strings, ints, and bools follow their Go syntax:
// +kubebuilder:validation:ExclusiveMaximum=false
// +kubebuilder:validation:Format="date-time"
// +kubebuilder:validation:Maximum=42
For convenience, in simple cases the quotes may be omitted from strings,
although this is not encouraged for anything other than single-word
strings:
// +kubebuilder:validation:Type=string
Slices may be specified either by surrounding them with curly braces and
separating with commas:
// +kubebuilder:webhooks:Enum={"crackers, Gromit, we forgot the crackers!","not even wensleydale?"}
or, in simple cases, by separating with semicolons:
// +kubebuilder:validation:Enum=Wallace;Gromit;Chicken
Maps are specified with string keys and values of any type (effectively
map[string]interface{}
). A map is surrounded by curly braces ({}
),
each key and value is separated by a colon (:
), and each key-value
pair is separated by a comma:
// +kubebuilder:default={magic: {numero: 42, stringified: forty-two}}
These markers describe how to construct a custom resource definition from
a series of Go types and packages. Generation of the actual validation
schema is described by the validation markers .
See Generating CRDs for examples.
// +groupName (string): specifies the API group name for this package.
// +kubebuilder:deprecatedversion (warning string): marks this version as deprecated; warning is the message to be shown on the deprecated version.
// +kubebuilder:metadata (annotations, labels string): configures additional annotations or labels for this CRD, for example adding the annotation "api-approved.kubernetes.io" for a CRD with Kubernetes groups, or the annotation "cert-manager.io/inject-ca-from-secret" for a CRD that needs CA injection. annotations will be added to the annotations of this CRD; labels will be added to the labels of this CRD.
// +kubebuilder:printcolumn (JSONPath, description, format, name string, priority int, type string): adds a column to "kubectl get" output for this CRD. JSONPath specifies the jsonpath expression used to extract the value of the column; description specifies the help/description for this column; format specifies the format of the column, which may be any OpenAPI data format corresponding to the type, listed at https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types; name specifies the name of the column; priority indicates how important it is that this column be displayed (lower priority, i.e. higher numbered, columns will be hidden if the terminal width is too small); type indicates the type of the column, which may be any OpenAPI data type listed at the same link.
// +kubebuilder:resource (categories, path, scope, shortName, singular string): configures naming and scope for a CRD. categories specifies which group aliases this resource is part of; group aliases are used to work with groups of resources at once, the most common being "all", which covers about a third of the base resources in Kubernetes and is generally used for "user-facing" resources. path specifies the plural "resource" for this CRD; it generally corresponds to a plural, lower-cased version of the Kind (see https://book.kubebuilder.io/cronjob-tutorial/gvks.html). scope overrides the scope of the CRD (Cluster vs Namespaced); scope defaults to "Namespaced", and cluster-scoped ("Cluster") resources don't exist in namespaces. shortName specifies aliases for this CRD; short names are often used when people work with your resource over and over again, for instance "rs" for "replicaset" or "crd" for customresourcedefinition. singular overrides the singular form of your resource; it is otherwise defaulted off the plural (path).
// +kubebuilder:skip: don't consider this package as an API version.
// +kubebuilder:skipversion: removes the particular version of the CRD from the CRD's spec. This is useful if you need to skip generating and listing version entries for 'internal' resource versions, which typically exist if using the Kubernetes upstream conversion-gen tool.
// +kubebuilder:storageversion: marks this version as the "storage version" for the CRD for conversion. When conversion is enabled for a CRD (i.e. it's not a trivial-versions/single-version CRD), one version is set as the "storage version" to be stored in etcd. Attempting to store any other version will result in conversion to the storage version via a conversion webhook.
// +kubebuilder:subresource:scale (selectorpath, specpath, statuspath string): enables the "/scale" subresource on a CRD. selectorpath specifies the jsonpath to the pod label selector field for the scale's status; the selector field must be the string form (serialized form) of a selector, and setting a pod label selector is necessary for your type to work with the HorizontalPodAutoscaler. specpath specifies the jsonpath to the replicas field for the scale's spec; statuspath specifies the jsonpath to the replicas field for the scale's status.
// +kubebuilder:subresource:status: enables the "/status" subresource on a CRD.
// +kubebuilder:unservedversion: does not serve this version. This is useful if you need to drop support for a version in favor of a newer version.
// +versionName (string): overrides the API group version for this package (defaults to the package name).
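Put together, a few of these markers on a type might look like the following (an illustrative sketch; the Widget type and its Spec/Status types are made up, and the usual metav1 import is assumed):
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:resource:shortName=wd,categories=all
// +kubebuilder:printcolumn:JSONPath=".spec.replicas",name=Replicas,type=integer

// Widget is a hypothetical example type carrying CRD processing markers.
// WidgetSpec and WidgetStatus would be defined elsewhere in the package.
type Widget struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   WidgetSpec   `json:"spec,omitempty"`
    Status WidgetStatus `json:"status,omitempty"`
}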
These markers modify how the CRD validation schema is produced for the
types and fields they modify. Each corresponds roughly to an OpenAPI/JSON
schema option.
See Generating CRDs for examples.
Most of these markers can be applied to individual fields as well as to whole types.
// +kubebuilder:default (any): sets the default value for this field. A default value will be accepted as any value valid for the field. Formatting for common types includes: boolean: true, string: Cluster, numerical: 1.24, array: {1,2}, object: {policy: "delete"}. Defaults should be defined in pruned form, and only best-effort validation will be performed. Full validation of a default requires submission of the containing CRD to an apiserver.
// +kubebuilder:example (any): sets the example value for this field. An example value will be accepted as any value valid for the field, with the same formatting and validation caveats as +kubebuilder:default.
// +kubebuilder:validation:EmbeddedResource: marks a field as an embedded resource with apiVersion, kind and metadata fields. An embedded resource is a value that has apiVersion, kind and metadata fields. They are validated implicitly according to the semantics of the currently running apiserver. It is not necessary to add any additional schema for these fields, yet it is possible. This can be combined with PreserveUnknownFields.
// +kubebuilder:validation:Enum (any): specifies that this (scalar) field is restricted to the *exact* values specified here.
// +kubebuilder:validation:ExclusiveMaximum (bool): indicates that the maximum is "up to" but not including that value.
// +kubebuilder:validation:ExclusiveMinimum (bool): indicates that the minimum is "up to" but not including that value.
// +kubebuilder:validation:Format (string): specifies additional "complex" formatting for this field. For example, a date-time field would be marked as "type: string" and "format: date-time".
// +kubebuilder:validation:MaxItems (int): specifies the maximum length for this list.
// +kubebuilder:validation:MaxLength (int): specifies the maximum length for this string.
// +kubebuilder:validation:MaxProperties (int): restricts the number of keys in an object.
// +kubebuilder:validation:Maximum: specifies the maximum numeric value that this field can have.
// +kubebuilder:validation:MinItems (int): specifies the minimum length for this list.
// +kubebuilder:validation:MinLength (int): specifies the minimum length for this string.
// +kubebuilder:validation:MinProperties (int): restricts the number of keys in an object.
// +kubebuilder:validation:Minimum: specifies the minimum numeric value that this field can have. Negative numbers are supported.
// +kubebuilder:validation:MultipleOf: specifies that this field must have a numeric value that's a multiple of this one.
// +kubebuilder:validation:Optional: on a package, specifies that all fields in this package are optional by default; on a field, specifies that this field is optional, if fields are required by default.
// +kubebuilder:validation:Pattern (string): specifies that this string must match the given regular expression.
// +kubebuilder:validation:Required: on a package, specifies that all fields in this package are required by default; on a field, specifies that this field is required, if fields are optional by default.
// +kubebuilder:validation:Schemaless: marks a field as being a schemaless object. Schemaless objects are not introspected, so you must provide any type and validation information yourself. One use for this tag is for embedding fields that hold JSONSchema typed objects. Because this disables all type checking, it is recommended to be used only as a last resort.
// +kubebuilder:validation:Type (string): overrides the type for this field (which defaults to the equivalent of the Go type). This generally must be paired with custom serialization. For example, the metav1.Time field would be marked as "type: string" and "format: date-time".
// +kubebuilder:validation:UniqueItems (bool): specifies that all items in this list must be unique.
// +kubebuilder:validation:XEmbeddedResource: marks a field as an embedded resource with apiVersion, kind and metadata fields, with the same semantics as +kubebuilder:validation:EmbeddedResource.
// +kubebuilder:validation:XIntOrString: marks a field as an IntOrString. This is required when applying patterns or other validations to an IntOrString field. Known information about the type is applied during the collapse phase and as such is not normally available during marker application.
// +kubebuilder:validation:XValidation (rule, message string): marks a field as requiring a value for which the given expression evaluates to true. This marker may be repeated to specify multiple expressions, all of which must evaluate to true.
// +nullable: marks this field as allowing the "null" value. This is often not necessary, but may be helpful with custom serialization.
// +optional: specifies that this field is optional, if fields are required by default.
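A hypothetical spec using several of the validation markers might look like this (field names, bounds, and the CEL rule are invented for illustration):
type QuotaSpec struct {
    // +kubebuilder:validation:Minimum=0
    // +kubebuilder:validation:Maximum=100
    // +kubebuilder:default=10
    // +kubebuilder:validation:XValidation:rule="self % 2 == 0",message="percentage must be even"
    Percentage int32 `json:"percentage"`

    // +kubebuilder:validation:Pattern="^[a-z0-9-]+$"
    // +kubebuilder:validation:MaxLength=63
    // +optional
    Alias string `json:"alias,omitempty"`
}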
These markers help control how the Kubernetes API server processes API
requests involving your custom resources.
See Generating CRDs for examples.
// +kubebuilder:pruning:PreserveUnknownFields: stops the apiserver from pruning fields which are not specified. By default the apiserver drops unknown fields from the request payload during the decoding step. This marker stops the API server from doing so. It affects fields recursively, but switches back to normal pruning behaviour if nested properties or additionalProperties are specified in the schema. This can either be true or undefined; false is forbidden. NB: the kubebuilder:validation:XPreserveUnknownFields variant is deprecated in favor of the kubebuilder:pruning:PreserveUnknownFields variant; they function identically.
// +listMapKey (string): specifies the keys to map listTypes. It indicates the index of a map list. It can be repeated if multiple keys must be used. It can only be used when listType is set to map, and the keys should be scalar types.
// +listType (string): specifies the type of data-structure that the list represents (map, set, atomic). Possible data-structure types of a list are: "map": it needs to have a key field, which will be used to build an associative list (a typical example is the pod container list, which is indexed by the container name); "set": fields need to be "scalar", and there can be only one occurrence of each; "atomic": all the fields in the list are treated as a single value and are typically manipulated together by the same actor.
// +mapType (string): specifies the level of atomicity of the map, i.e. whether each item in the map is independent of the others, or all fields are treated as a single unit. Possible values: "granular": items in the map are independent of each other and can be manipulated by different actors (this is the default behavior); "atomic": all fields are treated as one unit, and any changes have to replace the entire map.
// +structType (string): specifies the level of atomicity of the struct, i.e. whether each field in the struct is independent of the others, or all fields are treated as a single unit. Possible values: "granular": fields in the struct are independent of each other and can be manipulated by different actors (this is the default behavior); "atomic": all fields are treated as one unit, and any changes have to replace the entire struct.
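For instance, a list that should be merged entry-by-entry by server-side apply can be marked as a map list keyed by name, while a map that must always be replaced as a whole can be marked atomic (a hypothetical snippet for illustration):
type FleetSpec struct {
    // Merged entry-by-entry, using "name" as the key.
    // +listType=map
    // +listMapKey=name
    Endpoints []FleetEndpoint `json:"endpoints,omitempty"`

    // Always replaced as a single unit.
    // +mapType=atomic
    Selectors map[string]string `json:"selectors,omitempty"`
}

// FleetEndpoint is a hypothetical item type; Name is the merge key above.
type FleetEndpoint struct {
    Name string `json:"name"`
    URL  string `json:"url"`
}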
These markers describe how webhook configuration is generated.
Use these to keep the description of your webhooks close to the code that
implements them.
// +kubebuilder:webhook (admissionReviewVersions, failurePolicy, groups, matchPolicy, mutating, name, path, reinvocationPolicy, resources, sideEffects, verbs, versions, webhookVersions): specifies how a webhook should be served. It specifies only the details that are intrinsic to the application serving it (e.g. the resources it can handle, or the path it serves on).
admissionReviewVersions (string): an ordered list of preferred `AdmissionReview` versions the webhook expects.
failurePolicy (string): specifies what should happen if the API server cannot reach the webhook. It may be either "ignore" (to skip the webhook and continue on) or "fail" (to reject the object in question).
groups (string): specifies the API groups that this webhook receives requests for.
matchPolicy (string): defines how the "rules" list is used to match incoming requests. Allowed values are "Exact" (match only if it exactly matches the specified rule) or "Equivalent" (match a request if it modifies a resource listed in rules, even via another API group or version).
mutating (bool): marks this as a mutating webhook (it's validating only if false). Mutating webhooks are allowed to change the object in their response, and are called before all validating webhooks. Mutating webhooks may choose to reject an object, similarly to a validating webhook.
name (string): indicates the name of this webhook configuration. Should be a domain with at least three segments separated by dots.
path (string): specifies the path that the API server should connect to this webhook on. Must be prefixed with '/validate-' or '/mutate-' depending on the type, and followed by $GROUP-$VERSION-$KIND where all values are lower-cased and the periods in the group are substituted for hyphens. For example, a validating webhook path for type batch.tutorial.kubebuilder.io/v1,Kind=CronJob would be /validate-batch-tutorial-kubebuilder-io-v1-cronjob.
reinvocationPolicy (string): allows mutating webhooks to request reinvocation after other mutations. To allow mutating admission plugins to observe changes made by other plugins, built-in mutating admission plugins are re-run if a mutating webhook modifies an object, and mutating webhooks can specify a reinvocationPolicy to control whether they are reinvoked as well.
resources (string): specifies the API resources that this webhook receives requests for.
sideEffects (string): specifies whether calling the webhook will have side effects. This has an impact on dry runs and `kubectl diff`: if the sideEffect is "Unknown" (the default) or "Some", then the API server will not call the webhook on a dry-run request and fails instead. If the value is "None", then the webhook has no side effects and the API server will call it on dry-run. If the value is "NoneOnDryRun", then the webhook is responsible for inspecting the "dryRun" property of the AdmissionReview sent in the request, and avoiding side effects if that value is "true".
verbs (string): specifies the Kubernetes API verbs that this webhook receives requests for. Only modification-like verbs may be specified: "create", "update", "delete", "connect", or "*" (for all).
versions (string): specifies the API versions that this webhook receives requests for.
webhookVersions (string): specifies the target API versions of the {Mutating,Validating}WebhookConfiguration objects themselves to generate. The only supported value is v1. Defaults to v1.
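For example, the validating webhook from the CronJob tutorial is described by a marker along these lines (exact flags may differ slightly between Kubebuilder versions):
// +kubebuilder:webhook:path=/validate-batch-tutorial-kubebuilder-io-v1-cronjob,mutating=false,failurePolicy=fail,sideEffects=None,groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=create;update,versions=v1,name=vcronjob.kb.io,admissionReviewVersions=v1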
These markers control when DeepCopy and runtime.Object implementation methods are generated.
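For example, generation can be switched off for a helper type that should not receive generated DeepCopy methods (an illustrative snippet, not scaffolded code):
// +kubebuilder:object:generate=false

// internalDefaults is a hypothetical helper type for which DeepCopy
// generation is deliberately disabled.
type internalDefaults struct {
    Retries int
}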
These markers cause an RBAC ClusterRole to be generated. This allows you to describe the permissions that your controller requires alongside the code that makes use of those permissions.
Kubebuilder makes use of a tool called
controller-gen
for generating utility code and Kubernetes YAML. This code and config
generation is controlled by the presence of special “marker
comments” in Go code.
controller-gen is built out of different “generators” (which specify what
to generate) and “output rules” (which specify how and where to write the
results).
Both are configured through command line options specified in marker
format .
For instance, the following command:
controller-gen paths=./... crd:trivialVersions=true rbac:roleName=controller-perms output:crd:artifacts:config=config/crd/bases
generates CRDs and RBAC, and specifically stores the generated CRD YAML in
config/crd/bases
. For the RBAC, it uses the default output rules
(config/rbac
). It considers every package in the current directory tree
(as per the normal rules of the go ...
wildcard).
Each different generator is configured through a CLI option. Multiple
generators may be used in a single invocation of controller-gen
.
// +webhook (headerFile, year string): generates (partial) {Mutating,Validating}WebhookConfiguration objects. headerFile specifies the header text (e.g. license) to prepend to generated files; year specifies the year to substitute for "YEAR" in the header file.
// +schemapatch (generateEmbeddedObjectMeta bool, manifests string, maxDescLen int): patches existing CRDs with new schemata. It will generate output for each "CRD Version" (API version of the CRD type itself, e.g. apiextensions/v1) available. generateEmbeddedObjectMeta specifies if any embedded ObjectMeta in the CRD should be generated; manifests contains the CustomResourceDefinition YAML files; maxDescLen specifies the maximum description length for fields in the CRD's OpenAPI schema (0 drops the description for all fields completely; n limits the description to at most n characters and truncates the description to the closest sentence boundary if it exceeds n characters).
// +rbac (headerFile, roleName, year string): generates ClusterRole objects. roleName sets the name of the generated ClusterRole; headerFile and year behave as above.
// +object (headerFile, year string): generates code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.
// +crd (allowDangerousTypes bool, crdVersions string, generateEmbeddedObjectMeta bool, headerFile string, ignoreUnexportedFields bool, maxDescLen int, year string): generates CustomResourceDefinition objects. allowDangerousTypes allows types which are usually omitted from CRD generation because they are not recommended (currently float32 and float64 are additionally allowed when this is true); left unspecified, the default is false. crdVersions specifies the target API versions of the CRD type itself to generate; it defaults to v1, and currently the only supported value is v1. The first version listed will be assumed to be the "default" version and will not get a version suffix in the output filename. You'll need to use "v1" to get support for features like defaulting, along with an API server that supports it (Kubernetes 1.16+). generateEmbeddedObjectMeta specifies if any embedded ObjectMeta in the CRD should be generated. ignoreUnexportedFields indicates that unexported fields should be skipped; left unspecified, the default is false. maxDescLen specifies the maximum description length for fields in the CRD's OpenAPI schema, as for schemapatch. year specifies the year to substitute for "YEAR" in the header file.
Output rules configure how a given generator outputs its results. There is
always one global “fallback” output rule (specified as output:<rule>
),
plus per-generator overrides (specified as output:<generator>:<rule>
).
When no fallback rule is specified manually, a set of default
per-generator rules are used which result in YAML going to
config/<generator>
, and code staying where it belongs.
The default rules are equivalent to
output:<generator>:artifacts:config=config/<generator>
for each
generator.
When a “fallback” rule is specified, that’ll be used instead of the
default rules.
For example, if you specify crd rbac:roleName=controller-perms output:crd:stdout
, you’ll get CRDs on standard out, and rbac in a file in
config/rbac
. If you were to add in a global rule instead, like crd rbac:roleName=controller-perms output:crd:stdout output:none
, you’d get
CRDs to standard out, and everything else to /dev/null, because we’ve
explicitly specified a fallback.
For brevity, the per-generator output rules (output:<generator>:<rule>
)
are omitted below. They are equivalent to the global fallback options
listed here.
The Kubebuilder completion script can be generated with the command kubebuilder completion [bash|fish|powershell|zsh]
.
Note that sourcing the completion script in your shell enables Kubebuilder autocompletion.
The completion Bash script depends on bash-completion , which means that you have to install this software first (you can test if you have bash-completion already installed). Also, ensure that your Bash version is 4.1+.
Once installed, go ahead and add the path /usr/local/bin/bash to /etc/shells:
echo "/usr/local/bin/bash" >> /etc/shells
Make sure the current user is using the installed shell:
chsh -s /usr/local/bin/bash
Add the following content to ~/.bash_profile or ~/.bashrc:
# kubebuilder autocompletion
if [ -f /usr/local/share/bash-completion/bash_completion ]; then
. /usr/local/share/bash-completion/bash_completion
fi
. <(kubebuilder completion bash)
Restart the terminal for the changes to take effect, or source the changed bash file.
Kubebuilder publishes test binaries and container images in addition
to the main binary releases.
You can find test binary tarballs for all Kubernetes versions and host platforms at https://go.kubebuilder.io/test-tools
.
You can find a test binary tarball for a particular Kubernetes version and host platform at https://go.kubebuilder.io/test-tools/${version}/${os}/${arch}
.
You can find all container image versions for a particular platform at https://go.kubebuilder.io/images/${os}/${arch}
or at gcr.io/kubebuilder/thirdparty-${os}-${arch}
.
You can find the container image for a particular Kubernetes version and host platform at https://go.kubebuilder.io/images/${os}/${arch}/${version}
or at gcr.io/kubebuilder/thirdparty-${os}-${arch}:${version}
.
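These test binaries (etcd, kube-apiserver, and kubectl) are what envtest runs against. A minimal sketch of pointing an envtest environment at an unpacked tarball might look like this; the directory path below is only an example, and recent scaffolds usually set the KUBEBUILDER_ASSETS environment variable via setup-envtest in the Makefile instead.
import (
    "k8s.io/client-go/rest"
    "sigs.k8s.io/controller-runtime/pkg/envtest"
)

// startTestEnv points envtest at locally unpacked test binaries.
func startTestEnv() (*rest.Config, *envtest.Environment, error) {
    testEnv := &envtest.Environment{
        // Example path; use wherever you extracted the test-tools tarball.
        BinaryAssetsDirectory: "/usr/local/kubebuilder/bin",
    }
    cfg, err := testEnv.Start()
    return cfg, testEnv, err
}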
Kubebuilder produces solutions that by default can work on multiple platforms or specific ones, depending on how you
build and configure your workloads. This guide aims to help you properly configure your projects according to your needs.
To provide support on specific or multiple platforms, you must ensure that all images used in workloads are built to
support the desired platforms. Note that they may not be the same as the platform where you develop your solutions and use KubeBuilder, but instead the platform(s) where your solution should run and be distributed.
It is recommended to build solutions that work on multiple platforms so that your project works
on any Kubernetes cluster regardless of the underlying operating system and architecture.
The following covers what you need to do to provide the support for one or more platforms or architectures.
The images used in workloads such as in your Pods/Deployments will need to provide the support for this other platform.
You can inspect the manifest list of supported platforms for an image using the command docker manifest inspect, e.g.:
$ docker manifest inspect myregistry/example/myimage:v0.0.1
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
"manifests": [
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 739,
"digest": "sha256:a274a1a2af811a1daf3fd6b48ff3d08feb757c2c3f3e98c59c7f85e550a99a32",
"platform": {
"architecture": "arm64",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 739,
"digest": "sha256:d801c41875f12ffd8211fffef2b3a3d1a301d99f149488d31f245676fa8bc5d9",
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 739,
"digest": "sha256:f4423c8667edb5372fb0eafb6ec599bae8212e75b87f67da3286f0291b4c8732",
"platform": {
"architecture": "s390x",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 739,
"digest": "sha256:621288f6573c012d7cf6642f6d9ab20dbaa35de3be6ac2c7a718257ec3aff333",
"platform": {
"architecture": "ppc64le",
"os": "linux"
}
},
]
}
Kubernetes provides a mechanism called nodeAffinity which can be used to limit the possible node
targets where a pod can be scheduled. This is especially important to ensure correct scheduling behavior in clusters
with nodes that span across multiple platforms (i.e. heterogeneous clusters).
Kubernetes manifest example
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
- arm64
- ppc64le
- s390x
- key: kubernetes.io/os
operator: In
values:
- linux
Golang Example
Template: corev1.PodTemplateSpec{
...
Spec: corev1.PodSpec{
Affinity: &corev1.Affinity{
NodeAffinity: &corev1.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
NodeSelectorTerms: []corev1.NodeSelectorTerm{
{
MatchExpressions: []corev1.NodeSelectorRequirement{
{
Key: "kubernetes.io/arch",
Operator: "In",
Values: []string{"amd64"},
},
{
Key: "kubernetes.io/os",
Operator: "In",
Values: []string{"linux"},
},
},
},
},
},
},
},
SecurityContext: &corev1.PodSecurityContext{
...
},
Containers: []corev1.Container{{
...
}},
},
You can look for some code examples by checking the code which is generated via the Deploy
Image plugin. (More info )
You can use docker buildx to cross-compile via emulation (QEMU) to build the manager image.
Note that projects scaffolded with the latest versions of Kubebuilder have the Makefile target docker-buildx.
Example of Usage
$ make docker-buildx IMG=myregistry/myoperator:v0.0.1
Note that you need to ensure that all images and workloads required and used by your project will provide the same
support as recommended above, and that you properly configure the nodeAffinity for all your workloads.
Therefore, ensure that you uncomment the following code in the config/manager/manager.yaml
file
# TODO(user): Uncomment the following code to configure the nodeAffinity expression
# according to the platforms which are supported by your solution.
# It is considered best practice to support multiple architectures. You can
# build your manager image using the makefile target docker-buildx.
# affinity:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: kubernetes.io/arch
# operator: In
# values:
# - amd64
# - arm64
# - ppc64le
# - s390x
# - key: kubernetes.io/os
# operator: In
# values:
# - linux
You will probably want to automate the releases of your projects to ensure that the images are always built for the
same platforms. Note that Goreleaser also supports docker buildx . See its documentation for more detail.
Also, you may want to configure GitHub Actions, Prow jobs, or any other solution that you use to build images to
provide multi-platform support. Note that you can also use other options like docker manifest create
to customize
your solutions to achieve the same goals with other tools.
When using Docker and the default scaffolded target, you should NOT change the Dockerfile to use any specific GOOS and GOARCH to build the manager binary. However, if you are looking to customize the default scaffold and create your own implementation, you might want to take a look at the Golang documentation to learn about the available options.
Projects created with the Kubebuilder CLI have two workloads which are:
The container to run the manager implementation is configured in the config/manager/manager.yaml
file.
This image is built with the Dockerfile file scaffolded by default and contains the binary of the project
which will be built via the command go build -a -o manager main.go
.
Note that when you run make docker-build OR make docker-build IMG=myregistry/myprojectname:<tag>, an image will be built from the client host (local environment) for the client os/arch, which is commonly linux/amd64 or linux/arm64.
If you are running from a macOS environment, Docker will still consider the image linux/$arch. Be aware that, for example, when running Kind on macOS the nodes will
end up labeled with kubernetes.io/os=linux
A workload will be created to run the image gcr.io/kubebuilder/kube-rbac-proxy: which is configured in the config/default/manager_auth_proxy_patch.yaml manifest. It is a side-car proxy whose purpose is to protect the manager from malicious attacks. You can learn more about its motivations by looking at the README of the project github.com/brancz/kube-rbac-proxy.
Kubebuilder has been building this image with support for multiple architectures by default (check it here).
If you need to address any edge case scenario where you want to produce a project that
only provides support for a specific architecture platform, you can customize your
configuration manifests to use the specific architecture types built for this image.
This part describes how to modify a scaffolded project for use with multiple go.mod files for APIs and Controllers.
Sub-Module Layouts (in a way, you could call them a special form of Monorepos) are a special use case and can help in scenarios that involve reuse of APIs without introducing indirect dependencies that should not be available in the project consuming the API externally.
If you are looking to reconcile, via a controller, a Type (CRD) owned by another project, please see Using an external Type for more info.
Separate go.mod
modules for APIs and Controllers can help for the following cases:
There is an enterprise version of an operator available that wants to reuse APIs from the Community Version
There are many (possibly external) modules depending on the API and you want to have a more strict separation of transitive dependencies
If you want to reduce impact of transitive dependencies on your API being included in other projects
If you are looking to separately manage the lifecycle of your API release process from your controller release process.
If you are looking to modularize your codebase without splitting your code between multiple repositories.
However, they introduce multiple caveats into typical projects, which is one of the main factors that makes them hard to recommend in a generic use case or plugin:
Multiple go.mod modules are not recommended as a Go best practice and multiple modules are mostly discouraged.
There is always the possibility to extract your APIs into a new repository, and arguably also have more control over the release process in a project spanning multiple repos relying on the same API types.
It requires at least one replace directive, either through go.work (which means at least two more files plus an environment variable for build environments without GO_WORK) or through a go.mod replace, which has to be manually dropped and added for every release.
When deciding to deviate from the standard kubebuilder PROJECT setup or the extended layouts offered by its plugins, it can result in increased maintenance overhead, as breaking changes upstream could break the custom module structure described here.
Splitting your codebase into multiple repos and/or multiple modules incurs costs that will grow over time. You'll need to define clear version dependencies between your own modules, do phased upgrades carefully, etc. Especially for small-to-medium projects, one repo and one module is the best way to go.
Bear in mind that it is not recommended to deviate from the proposed layout unless you know what you are doing.
You may also lose the ability to use some of the CLI features and helpers. For further information on the project layout, see the doc What's in a basic project?
For a proper Sub-Module layout, we will use the generated APIs as a starting point.
For the steps below, we will assume you created your project in your GOPATH
with
kubebuilder init
and created an API & controller with
kubebuilder create api --group operator --version v1alpha1 --kind Sample --resource --controller --make
Now that we have a base layout in place, we will enable you for multiple modules.
Navigate to api/v1alpha1
Run go mod init
to create a new submodule
Run go mod tidy
to resolve the dependencies
Your api go.mod file could now look like this:
module YOUR_GO_PATH/test-operator/api/v1alpha1
go 1.21.0
require (
k8s.io/apimachinery v0.28.4
sigs.k8s.io/controller-runtime v0.16.3
)
require (
github.com/go-logr/logr v1.2.4 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
golang.org/x/net v0.17.0 // indirect
golang.org/x/text v0.13.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
k8s.io/klog/v2 v2.100.1 // indirect
k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
)
As you can see it only includes apimachinery and controller-runtime as dependencies and any dependencies you have
declared in your controller are not taken over into the indirect imports.
When trying to resolve your main module in the root folder of the operator, you will notice an error if you use a VCS path:
go mod tidy
go: finding module for package YOUR_GO_PATH/test-operator/api/v1alpha1
YOUR_GO_PATH/test-operator imports
YOUR_GO_PATH/test-operator/api/v1alpha1: cannot find module providing package YOUR_GO_PATH/test-operator/api/v1alpha1: module YOUR_GO_PATH/test-operator/api/v1alpha1: git ls-remote -q origin in LOCALVCSPATH: exit status 128:
remote: Repository not found.
fatal: repository 'https://YOUR_GO_PATH/test-operator/' not found
The reason for this is that you may have not pushed your modules into the VCS yet and resolving the main module will fail as it can no longer
directly access the API types as a package but only as a module.
To solve this issue, we will have to tell the go tooling to properly replace
the API module with a local reference to your path.
You can do this with 2 different approaches: go modules and go workspaces.
For go modules, you will edit the main go.mod file of your project and issue a replace directive.
You can do this by editing the go.mod with:
go mod edit -require YOUR_GO_PATH/test-operator/api/v1alpha1@v0.0.0 # Only if you didn't already resolve the module
go mod edit -replace YOUR_GO_PATH/test-operator/api/v1alpha1@v0.0.0=./api/v1alpha1
go mod tidy
Note that we used the placeholder version v0.0.0
of the API Module. In case you already released your API module once,
you can use the real version as well. However this will only work if the API Module is already available in the VCS.
Since the main go.mod
file now has a replace directive, it is important to drop it again before releasing your controller module.
To achieve this you can simply run
go mod edit -dropreplace YOUR_GO_PATH/test-operator/api/v1alpha1
go mod tidy
For go workspaces, you will not edit the go.mod
files yourself, but rely on the workspace support in go.
To initialize a workspace for your project, run go work init
in the project root.
Now let us include both modules in our workspace:
go work use . # This includes the main module with the controller
go work use api/v1alpha1 # This is the API submodule
go work sync
This will cause commands such as go run or go build to respect the workspace and ensure that local resolution is used.
You will be able to work with this locally without having to build your module.
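The resulting go.work file might look like the following (a sketch based on the module layout above):
go 1.21.0

use (
	.
	./api/v1alpha1
)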
When using go.work files, it is recommended not to commit them to the repository but to add them to .gitignore instead:
go.work
go.work.sum
When releasing while a go.work file is present, make sure to set the environment variable GOWORK=off (verifiable with go env GOWORK) so that the release process is not impeded by a potentially committed go.work file.
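For example (a minimal sketch; the actual build command depends on your release tooling):
go env GOWORK                  # shows which go.work file is currently in effect, if any
GOWORK=off go build ./...      # build while ignoring any go.work file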
When building your controller image, kubebuilder by default is not able to work with multiple modules.
You will have to manually add the new API module into the download of dependencies:
# Build the manager binary
FROM golang:1.20 as builder
ARG TARGETOS
ARG TARGETARCH
WORKDIR /workspace
# Copy the Go Modules manifests
COPY go.mod go.mod
COPY go.sum go.sum
# Copy the Go Sub-Module manifests
COPY api/v1alpha1/go.mod api/v1alpha1/go.mod
COPY api/v1alpha1/go.sum api/v1alpha1/go.sum
# cache deps before building and copying source so that we don't need to re-download as much
# and so that source changes don't invalidate our downloaded layer
RUN go mod download
# Copy the go source
COPY cmd/main.go cmd/main.go
COPY api/ api/
COPY internal/controller/ internal/controller/
# Build
# GOARCH does not have a default value so that the binary is built according to the host where the command
# was called. For example, if we call make docker-build in a local env which has the Apple Silicon M1 OS
# the docker BUILDPLATFORM arg will be linux/arm64, while for Apple x86 it will be linux/amd64. Therefore,
# by leaving it empty we can ensure that the container and binary shipped on it will have the same platform.
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager cmd/main.go
# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
USER 65532:65532
ENTRYPOINT ["/manager"]
Because you adjusted the default layout, before releasing your first version of your operator, make sure to familiarize yourself with mono-repo/multi-module releases with multiple go.mod
files in different subdirectories.
Assuming a single API was created, the release process could look like this:
git commit
git tag v1.0.0 # this is your main module release
git tag api/v1alpha1/v1.0.0 # this is your api release (the tag prefix must match the submodule path)
go mod edit -require YOUR_GO_PATH/test-operator/api/v1alpha1@v1.0.0 # now we depend on the released api module in the main module
go mod edit -dropreplace YOUR_GO_PATH/test-operator/api/v1alpha1 # this drops the replace directive used for local development in case you use go modules, meaning the sources from the VCS will be used instead of the ones in your monorepo checked out locally.
git push origin main v1.0.0 api/v1alpha1/v1.0.0
After this, your modules will be available in the VCS and you no longer need a local replacement. However, if you are making local changes, make sure to adapt your workflow with replace directives accordingly.
Whenever you want to reuse your API module in a separate kubebuilder project, we will assume you follow the guide for using an external Type.
When you get to the step Edit the API files
simply import the dependency with
go get YOUR_GO_PATH/test-operator/api@v1.0.0
and then use it as explained in the guide.
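A minimal sketch of registering the released API types in the consuming project's scheme (it assumes the kubebuilder-generated AddToScheme and the utilruntime and scheme variables already present in a scaffolded cmd/main.go; the import alias is illustrative):
import (
	samplev1alpha1 "YOUR_GO_PATH/test-operator/api/v1alpha1"
)

func init() {
	// Register the released API types so the manager's client can work with them.
	utilruntime.Must(samplev1alpha1.AddToScheme(scheme))
}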
There are several different external types that may be referenced when writing a controller.
Custom Resource Definitions (CRDs) that are defined in the current project (such as via kubebuilder create api
).
Core Kubernetes Resources (e.g. Deployments or Pods).
CRDs that are created and installed in another project.
A custom API defined via the aggregation layer, served by an extension API server for which the primary API server acts as a proxy.
Currently, kubebuilder handles the first two, CRDs and Core Resources, seamlessly. You must scaffold the latter two, External CRDs and APIs created via aggregation, manually.
In order to use a Kubernetes Custom Resource that has been defined in another project
you will need to have several items of information.
The Domain of the CR
The Group under the Domain
The Go import path of the CR Type definition
The Custom Resource Type you want to depend on.
The Domain and Group variables have been discussed in other parts of the documentation. The import path would be located in the project that installs the CR.
The Custom Resource Type is usually a Go Type of the same name as the CustomResourceDefinition in kubernetes, e.g. for a Pod
there will be a type Pod
in the v1
group.
For Kubernetes Core Types, the domain can be omitted.
This document uses my
and their
prefixes as a naming convention for repos, groups, and types to clearly distinguish between your own project and the external one you are referencing.
In our example we will assume the following external API Type:
github.com/theiruser/theirproject
is another kubebuilder project on whose CRD we want to depend and extend on.
Thus, it contains a go.mod
in its repository root. The import path for the go types would be github.com/theiruser/theirproject/api/theirgroup/v1alpha1
.
The Domain of the CR is theirs.com
, the Group is theirgroup
and the kind and go type would be ExternalType
.
If you are interested in having multiple Controllers running in different Groups (e.g. because one is an owned CRD and one is an external Type), please first reconfigure the Project to use a multi-group layout as described in the Multi-Group documentation.
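This is typically enabled with the following command (see the Multi-Group documentation for details):
kubebuilder edit --multigroup=true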
The following guide assumes that you have already created a project using kubebuilder init
in a directory in the GOPATH. Please reference the Getting Started Guide for more information.
Note that if you did not pass --domain
to kubebuilder init
you will need to modify it for the individual api types as the default is my.domain
, not theirs.com
.
Similarly, if you intend to use your own domain, please configure your own domain with kubebuilder init
and do not use theirs.com for the domain.
Run the command create api
to scaffold only the controller to manage the external type:
kubebuilder create api --group <theirgroup> --version v1alpha1 --kind <ExternalTypeKind> --controller --resource=false
Note that the resource
argument is set to false, as we are not attempting to create our own CustomResourceDefinition,
but instead rely on an external one.
This will result in a PROJECT
entry with the default domain of the PROJECT
(my.domain
if not specified in kubebuilder init
).
For use of other domains, such as theirs.com
, one will have to manually adjust the PROJECT
file with the correct domain for the entry:
If you are looking to create Controllers to manage Kubernetes Core types (e.g. Deployments/Pods), you do not need to update the PROJECT file or register the Schema in the manager. All Core Types are registered by default. The Kubebuilder CLI will add the required values to the PROJECT file, but you still need to change the RBAC markers manually to ensure that the Rules will be generated accordingly.
file: PROJECT
domain: my.domain
layout:
- go.kubebuilder.io/v4
projectName: testkube
repo: example.com
resources:
- controller: true
domain: my.domain ## <- Replace the domain with theirs.com domain
group: mygroup
kind: ExternalType
version: v1alpha1
version: "3"
At the same time, the generated RBAC manifests need to be adjusted:
file: internal/controller/externaltype_controller.go
// ExternalTypeReconciler reconciles a ExternalType object
type ExternalTypeReconciler struct {
client.Client
Scheme *runtime.Scheme
}
// external types can be added like this
//+kubebuilder:rbac:groups=theirgroup.theirs.com,resources=externaltypes,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=theirgroup.theirs.com,resources=externaltypes/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=theirgroup.theirs.com,resources=externaltypes/finalizers,verbs=update
// core types can be added like this
//+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=pods/status,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=pods/finalizers,verbs=update
Note that this is only valid for external types and not the kubernetes core types.
Core types such as pods or nodes are registered by default in the scheme.
Edit the main.go file as follows to register the external types:
file: cmd/main.go
package main
import (
theirgroupv1alpha1 "github.com/theiruser/theirproject/apis/theirgroup/v1alpha1"
)
func init() {
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
utilruntime.Must(theirgroupv1alpha1.AddToScheme(scheme)) // this contains the external API types
//+kubebuilder:scaffold:scheme
}
file: internal/controllers/externaltype_controllers.go
package controllers
import (
theirgroupv1alpha1 "github.com/theiruser/theirproject/apis/theirgroup/v1alpha1"
)
//...
// SetupWithManager sets up the controller with the Manager.
func (r *ExternalTypeReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&theirgroupv1alpha1.ExternalType{}).
Complete(r)
}
Note that core resources may simply be imported by depending on the APIs from upstream Kubernetes and do not need additional AddToScheme registrations:
file: internal/controllers/externaltype_controllers.go
package controllers
// contains core resources like Pod
import (
corev1 "k8s.io/api/core/v1"
)
// SetupWithManager sets up the controller with the Manager.
func (r *ExternalTypeReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&corev1.Pod{}).
Complete(r)
}
go mod tidy
make manifests
Edit the CRDDirectoryPaths
in your test suite and add the correct AddToScheme
entry during suite initialization:
file: internal/controllers/suite_test.go
package controller
import (
"fmt"
"path/filepath"
"runtime"
"testing"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"k8s.io/client-go/kubernetes/scheme"
"k8s.io/client-go/rest"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/envtest"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
//+kubebuilder:scaffold:imports
theirgroupv1alpha1 "github.com/theiruser/theirproject/apis/theirgroup/v1alpha1"
)
var cfg *rest.Config
var k8sClient client.Client
var testEnv *envtest.Environment
func TestControllers(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Controller Suite")
}
var _ = BeforeSuite(func() {
//...
By("bootstrapping test environment")
testEnv = &envtest.Environment{
CRDDirectoryPaths: []string{
// if you are using vendoring and rely on a kubebuilder based project, you can simply rely on the vendored config directory
filepath.Join("..", "..", "..", "vendor", "github.com", "theiruser", "theirproject", "config", "crds"),
// otherwise you can simply download the CRD from any source and place it within the config/crd/bases directory,
filepath.Join("..", "..", "config", "crd", "bases"),
},
ErrorIfCRDPathMissing: false,
// The BinaryAssetsDirectory is only required if you want to run the tests directly
// without calling the makefile target test. If not set, it will look for the
// default path defined in controller-runtime which is /usr/local/kubebuilder/.
// Note that you must have the required binaries set up under the bin directory to perform
// the tests directly. When we run make test they will be set up and used automatically.
BinaryAssetsDirectory: filepath.Join("..", "..", "bin", "k8s",
fmt.Sprintf("1.28.3-%s-%s", runtime.GOOS, runtime.GOARCH)),
}
var err error
// cfg is defined in this file globally.
cfg, err = testEnv.Start()
Expect(err).NotTo(HaveOccurred())
Expect(cfg).NotTo(BeNil())
//+kubebuilder:scaffold:scheme
Expect(theirgroupv1alpha1.AddToScheme(scheme.Scheme)).To(Succeed())
k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme})
Expect(err).NotTo(HaveOccurred())
Expect(k8sClient).NotTo(BeNil())
})
Since we are now using external types, you have to rely on them being installed into the cluster.
If the APIs are not available when the manager starts, all informers listening for the unavailable types
will fail, causing the manager to exit with an error similar to
failed to get informer from cache {"error": "Timeout: failed waiting for *v1alpha1.ExternalType Informer to sync"}
This signals that the API Server is not yet ready to serve the external types. The following kubectl commands may be useful:
kubectl api-resources --verbs=list -o name
kubectl api-resources --verbs=list -o name | grep my.domain
The controller-runtime/pkg/envtest
Go library helps write integration tests for your controllers by setting up and starting an instance of etcd and the
Kubernetes API server, without kubelet, controller-manager or other components.
Installing the binaries is as simple as running make envtest. envtest
will download the Kubernetes API server binaries to the bin/
folder in your project
by default. make test
is the one-stop shop for downloading the binaries, setting up the test environment, and running the tests.
The make targets require bash
to run.
If you would like to download the tarball containing the binaries for use in a disconnected environment, you can use setup-envtest to download the required binaries locally. There are many ways to configure setup-envtest to avoid talking to the internet; you can read about them here.
The examples below show how to install the Kubernetes API binaries using mostly the defaults set by setup-envtest.
make envtest
will download the setup-envtest
binary to ./bin/
.
make envtest
Installing the binaries using setup-envtest stores them in OS-specific locations; you can read more about them here.
./bin/setup-envtest use 1.21.2
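For a disconnected workflow you might first populate a binary directory while online and later resolve only what is already on disk, for example (version and paths are illustrative):
./bin/setup-envtest use 1.28.3 --bin-dir ./bin/k8s -p path      # downloads the binaries if missing and prints their path
./bin/setup-envtest use 1.28.3 --bin-dir ./bin/k8s -i -p path   # only considers locally installed binaries, no network access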
Once these binaries are installed, change the test
make target to include a -i
like below. -i
will only check for locally installed
binaries and not reach out to remote resources. You could also set the ENVTEST_INSTALLED_ONLY
env variable.
test: manifests generate fmt vet
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) -i --bin-dir $(LOCALBIN) -p path)" go test ./... -coverprofile cover.out
NOTE: The ENVTEST_K8S_VERSION
needs to match the setup-envtest
you downloaded above. Otherwise, you will see an error like the below
no such version (1.24.5) exists on disk for this architecture (darwin/amd64) -- try running `list -i` to see what's on disk
There have been many reports of the kube-apiserver
or etcd
binary hanging during cleanup
or misbehaving in other ways. We recommend using the 1.19.2 tools version to circumvent such issues,
which do not seem to arise in 1.22+. This is likely NOT the cause of a fork/exec: permission denied
or fork/exec: not found
error, which is caused by improper tools installation.
Using envtest
in integration tests follows the general flow of:
import sigs.k8s.io/controller-runtime/pkg/envtest
//specify testEnv configuration
testEnv = &envtest.Environment{
CRDDirectoryPaths: []string{filepath.Join("..", "config", "crd", "bases")},
}
//start testEnv
cfg, err = testEnv.Start()
//write test logic
//stop testEnv
err = testEnv.Stop()
kubebuilder
does the boilerplate setup and teardown of testEnv for you, in the ginkgo test suite that it generates under the /controllers
directory.
Logs from the test runs are prefixed with test-env
.
You can use the plugin DeployImage to check examples. This plugin allows users to scaffold API/Controllers to deploy and manage an Operand (image) on the cluster following the guidelines and best practices. It abstracts the complexities of achieving this goal while allowing users to customize the generated code.
Therefore, you can check that a test using ENVTEST will be generated for the controller, whose purpose is to ensure that the Deployment is created successfully. You can see an example of its code implementation under the testdata directory in the DeployImage samples here.
Controller-runtime’s envtest framework requires kubectl
, kube-apiserver
, and etcd
binaries be present locally to simulate the API portions of a real cluster.
The make test
command will install these binaries to the bin/
directory and use them when running tests that use envtest
.
For example:
./bin/k8s/
└── 1.25.0-darwin-amd64
├── etcd
├── kube-apiserver
└── kubectl
1 directory, 3 files
You can use environment variables and/or flags to specify the kubectl, api-server and etcd setup within your integration tests; a usage example follows the table below.
Variable name Type When to use
USE_EXISTING_CLUSTER
boolean Instead of setting up a local control plane, point to the control plane of an existing cluster.
KUBEBUILDER_ASSETS
path to directory Point integration tests to a directory containing all binaries (api-server, etcd and kubectl).
TEST_ASSET_KUBE_APISERVER
, TEST_ASSET_ETCD
, TEST_ASSET_KUBECTL
paths to, respectively, api-server, etcd and kubectl binaries Similar to KUBEBUILDER_ASSETS
, but more granular. Point integration tests to use binaries other than the default ones. These environment variables can also be used to ensure specific tests run with expected versions of these binaries.
KUBEBUILDER_CONTROLPLANE_START_TIMEOUT
and KUBEBUILDER_CONTROLPLANE_STOP_TIMEOUT
durations in format supported by time.ParseDuration
Specify timeouts different from the default for the test control plane to (respectively) start and stop; any test run that exceeds them will fail.
KUBEBUILDER_ATTACH_CONTROL_PLANE_OUTPUT
boolean Set to true
to attach the control plane’s stdout and stderr to os.Stdout and os.Stderr. This can be useful when debugging test failures, as output will include output from the control plane.
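For example, two common ways to use these variables when invoking the tests directly (paths and versions are illustrative):
# run the suite against an existing cluster (uses the current kubeconfig context)
USE_EXISTING_CLUSTER=true go test ./... -coverprofile cover.out

# or point envtest at a directory of pre-downloaded binaries
KUBEBUILDER_ASSETS=$(pwd)/bin/k8s/1.28.3-linux-amd64 go test ./...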
Note that the test Makefile target ensures that everything is properly set up when you use it. However, if you would like to run the tests without using the Makefile targets, for example via an IDE, you can set the environment variables directly in the code of your suite_test.go:
var _ = BeforeSuite(func(done Done) {
Expect(os.Setenv("TEST_ASSET_KUBE_APISERVER", "../bin/k8s/1.25.0-darwin-amd64/kube-apiserver")).To(Succeed())
Expect(os.Setenv("TEST_ASSET_ETCD", "../bin/k8s/1.25.0-darwin-amd64/etcd")).To(Succeed())
Expect(os.Setenv("TEST_ASSET_KUBECTL", "../bin/k8s/1.25.0-darwin-amd64/kubectl")).To(Succeed())
// OR
Expect(os.Setenv("KUBEBUILDER_ASSETS", "../bin/k8s/1.25.0-darwin-amd64")).To(Succeed())
logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))
testenv = &envtest.Environment{}
_, err := testenv.Start()
Expect(err).NotTo(HaveOccurred())
close(done)
}, 60)
var _ = AfterSuite(func() {
Expect(testenv.Stop()).To(Succeed())
Expect(os.Unsetenv("TEST_ASSET_KUBE_APISERVER")).To(Succeed())
Expect(os.Unsetenv("TEST_ASSET_ETCD")).To(Succeed())
Expect(os.Unsetenv("TEST_ASSET_KUBECTL")).To(Succeed())
})
You can look at the controller-runtime docs to learn more about its configuration options, see here. On top of that, if you are looking to use ENVTEST to test your webhooks, you might want to take a look at its install options.
Here’s an example of modifying the flags with which to start the API server in your integration tests, compared to the default values in envtest.DefaultKubeAPIServerFlags
:
customApiServerFlags := []string{
"--secure-port=6884",
"--admission-control=MutatingAdmissionWebhook",
}
apiServerFlags := append([]string(nil), envtest.DefaultKubeAPIServerFlags...)
apiServerFlags = append(apiServerFlags, customApiServerFlags...)
testEnv = &envtest.Environment{
CRDDirectoryPaths: []string{filepath.Join("..", "config", "crd", "bases")},
KubeAPIServerFlags: apiServerFlags,
}
Unless you’re using an existing cluster, keep in mind that no built-in controllers are running in the test context. In some ways, the test control plane will behave differently from “real” clusters, and that might have an impact on how you write tests. One common example is garbage collection; because there are no controllers monitoring built-in resources, objects do not get deleted, even if an OwnerReference
is set up.
To test that the deletion lifecycle works, test the ownership instead of asserting on existence. For example:
expectedOwnerReference := v1.OwnerReference{
Kind: "MyCoolCustomResource",
APIVersion: "my.api.example.com/v1beta1",
UID: "d9607e19-f88f-11e6-a518-42010a800195",
Name: "userSpecifiedResourceName",
}
Expect(deployment.ObjectMeta.OwnerReferences).To(ContainElement(expectedOwnerReference))
EnvTest does not support namespace deletion. Deleting a namespace will seem to succeed, but the namespace will just be put in a Terminating state, and never actually be reclaimed. Trying to recreate the namespace will fail. This will cause your reconciler to continue reconciling any objects left behind, unless they are deleted.
To overcome this limitation you can create a new namespace for each test. Even so, when one test completes (e.g. in “namespace-1”) and another test starts (e.g. in “namespace-2”), the controller will still be reconciling any active objects from “namespace-1”. This can be avoided by ensuring that all tests clean up after themselves as part of the test teardown. If teardown of a namespace is difficult, it may be possible to wire the reconciler in such a way that it ignores reconcile requests that come from namespaces other than the one being tested:
type MyCoolReconciler struct {
client.Client
...
Namespace string // restrict namespaces to reconcile
}
func (r *MyCoolReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
_ = r.Log.WithValues("myreconciler", req.NamespacedName)
// Ignore requests for other namespaces, if specified
if r.Namespace != "" && req.Namespace != r.Namespace {
return ctrl.Result{}, nil
}
Whenever your tests create a new namespace, they can modify the value of reconciler.Namespace; the reconciler will then effectively ignore objects left over in the previous namespace.
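A sketch of how a test could do this, assuming the k8sClient variable from your suite setup and the usual context, corev1 and metav1 imports (the namespace name generation and the myCoolReconciler variable are illustrative):
var _ = BeforeEach(func() {
	// Create a dedicated namespace for this test.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: "test-ns-"}}
	Expect(k8sClient.Create(context.TODO(), ns)).To(Succeed())

	// Scope the reconciler so that requests from other namespaces are ignored.
	myCoolReconciler.Namespace = ns.Name
})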
For further information see the issue raised in the controller-runtime controller-runtime/issues/880 to add this support.
Projects scaffolded with Kubebuilder can enable the metrics
and the cert-manager
options. Note that when using ENVTEST we are looking to test the controllers and their reconciliation. It is considered an integration test because the ENVTEST API runs the tests against a cluster, and because of this the binaries are downloaded and used to configure its prerequisites; however, its purpose is mainly to unit
test the controllers.
Therefore, to test reconciliation in common cases you do not need to care about these options. However, if you would like to run tests with Prometheus and cert-manager installed, you can add the required steps to install them before running the tests.
Here is an example:
// Add the operations to install the Prometheus operator and the cert-manager
// before the tests.
BeforeEach(func() {
By("installing prometheus operator")
Expect(utils.InstallPrometheusOperator()).To(Succeed())
By("installing the cert-manager")
Expect(utils.InstallCertManager()).To(Succeed())
})
// You can also remove them after the tests:
AfterEach(func() {
By("uninstalling the Prometheus manager bundle")
utils.UninstallPrometheusOperManager()
By("uninstalling the cert-manager bundle")
utils.UninstallCertManager()
})
Check the following example of how you can implement the above operations:
const (
prometheusOperatorVersion = "0.51"
prometheusOperatorURL = "https://raw.githubusercontent.com/prometheus-operator/" + "prometheus-operator/release-%s/bundle.yaml"
certmanagerVersion = "v1.5.3"
certmanagerURLTmpl = "https://github.com/cert-manager/cert-manager/releases/download/%s/cert-manager.yaml"
)
func warnError(err error) {
fmt.Fprintf(GinkgoWriter, "warning: %v\n", err)
}
// InstallPrometheusOperator installs the prometheus Operator to be used to export the enabled metrics.
func InstallPrometheusOperator() error {
url := fmt.Sprintf(prometheusOperatorURL, prometheusOperatorVersion)
cmd := exec.Command("kubectl", "apply", "-f", url)
_, err := Run(cmd)
return err
}
// UninstallPrometheusOperator uninstalls the prometheus
func UninstallPrometheusOperator() {
url := fmt.Sprintf(prometheusOperatorURL, prometheusOperatorVersion)
cmd := exec.Command("kubectl", "delete", "-f", url)
if _, err := Run(cmd); err != nil {
warnError(err)
}
}
// UninstallCertManager uninstalls the cert manager
func UninstallCertManager() {
url := fmt.Sprintf(certmanagerURLTmpl, certmanagerVersion)
cmd := exec.Command("kubectl", "delete", "-f", url)
if _, err := Run(cmd); err != nil {
warnError(err)
}
}
// InstallCertManager installs the cert manager bundle.
func InstallCertManager() error {
url := fmt.Sprintf(certmanagerURLTmpl, certmanagerVersion)
cmd := exec.Command("kubectl", "apply", "-f", url)
if _, err := Run(cmd); err != nil {
return err
}
// Wait for cert-manager-webhook to be ready, which can take time if cert-manager
// was re-installed after uninstalling on a cluster.
cmd = exec.Command("kubectl", "wait", "deployment.apps/cert-manager-webhook",
"--for", "condition=Available",
"--namespace", "cert-manager",
"--timeout", "5m",
)
_, err := Run(cmd)
return err
}
// LoadImageToKindCluster loads a local docker image to the kind cluster
func LoadImageToKindClusterWithName(name string) error {
cluster := "kind"
if v, ok := os.LookupEnv("KIND_CLUSTER"); ok {
cluster = v
}
kindOptions := []string{"load", "docker-image", name, "--name", cluster}
cmd := exec.Command("kind", kindOptions...)
_, err := Run(cmd)
return err
}
However, note that tests for the metrics and cert-manager might fit better as e2e tests rather than under the ENVTEST-based tests for the controllers. You might want to take a look at the sample implemented in the Operator-SDK repository to learn how you can write e2e tests that ensure the basic workflows of your project.
Also, note that if you run the tests against a cluster where you already have some configuration in place, they can use the option to test against an existing cluster:
testEnv = &envtest.Environment{
UseExistingCluster: true,
}
By default, controller-runtime builds a global prometheus registry and
publishes a collection of performance metrics for each controller.
These metrics are protected by kube-rbac-proxy
by default if using kubebuilder. Kubebuilder v2.2.0+ scaffolds a clusterrole which can be found at config/rbac/auth_proxy_client_clusterrole.yaml.
You will need to grant permissions to your Prometheus server so that it can
scrape the protected metrics. To achieve that, you can create a
clusterRoleBinding
to bind the clusterRole
to the service account that your
Prometheus server uses. If you are using kube-prometheus ,
this cluster binding already exists.
You can either run the following command, or apply the example yaml file provided below to create clusterRoleBinding
.
If using kubebuilder
<project-prefix>
is the namePrefix
field in config/default/kustomization.yaml
.
kubectl create clusterrolebinding metrics --clusterrole=<project-prefix>-metrics-reader --serviceaccount=<namespace>:<service-account-name>
You can also apply the following ClusterRoleBinding
:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: prometheus-k8s-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus-k8s-role
subjects:
- kind: ServiceAccount
name: <prometheus-service-account>
namespace: <prometheus-service-account-namespace>
The prometheus-k8s-role referenced here should provide the necessary permissions to allow Prometheus to scrape metrics from operator pods.
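A minimal sketch of such a ClusterRole, assuming the metrics endpoint is protected by kube-rbac-proxy and authorized via the /metrics non-resource URL:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-k8s-role
rules:
  - nonResourceURLs:
      - /metrics
    verbs:
      - get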
Follow the steps below to export the metrics using the Prometheus Operator:
Install Prometheus and Prometheus Operator.
We recommend using kube-prometheus
in production if you don’t have your own monitoring system.
If you are just experimenting, you can install only Prometheus and Prometheus Operator.
Uncomment the line - ../prometheus
in the config/default/kustomization.yaml
.
It creates the ServiceMonitor
resource which enables exporting the metrics.
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
- ../prometheus
Note that, when you install your project in the cluster, it will create the
ServiceMonitor
to export the metrics. To check the ServiceMonitor,
run kubectl get ServiceMonitor -n <project>-system
. See an example:
$ kubectl get ServiceMonitor -n monitor-system
NAME AGE
monitor-controller-manager-metrics-monitor 2m8s
If you are using Prometheus Operator, ensure that you have the required permissions. Be aware that, by default, its RBAC rules are only enabled for the default and kube-system namespaces. See its guide to learn how to configure kube-prometheus to monitor other namespaces using the .jsonnet file.
Alternatively, you can give the Prometheus Operator permissions to monitor other namespaces using RBAC. See the Prometheus Operator
Enable RBAC rules for Prometheus pods
documentation to know how to enable the permissions on the namespace where the
ServiceMonitor
and manager exist.
Also, notice that the metrics are exported by default through port 8443. In this way, you are able to check the Prometheus metrics in its dashboard. To verify it, search for the metrics exported from the namespace where the project is running, e.g. {namespace="<project>-system"}.
If you wish to publish additional metrics from your controllers, this
can be easily achieved by using the global registry from
controller-runtime/pkg/metrics
.
One way to achieve this is to declare your collectors as global variables and then register them using init()
in the controller’s package.
For example:
import (
"github.com/prometheus/client_golang/prometheus"
"sigs.k8s.io/controller-runtime/pkg/metrics"
)
var (
goobers = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "goobers_total",
Help: "Number of goobers proccessed",
},
)
gooberFailures = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "goober_failures_total",
Help: "Number of failed goobers",
},
)
)
func init() {
// Register custom metrics with the global prometheus registry
metrics.Registry.MustRegister(goobers, gooberFailures)
}
You may then record metrics to those collectors from any part of your
reconcile loop. These metrics can be evaluated from anywhere in the operator code.
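For instance, incrementing the counters from inside Reconcile might look like this (GuestbookReconciler and doSomething are illustrative placeholders, not part of the scaffold):
func (r *GuestbookReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	goobers.Inc() // count every reconciliation attempt

	if err := doSomething(ctx); err != nil {
		gooberFailures.Inc() // count failed reconciliations
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}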
In order to publish metrics and view them on the Prometheus UI, the Prometheus instance would have to be configured to select the Service Monitor instance based on its labels.
Those metrics will be available for prometheus or
other openmetrics systems to scrape.
The following are the metrics exported and provided by controller-runtime by default:
By default, the projects are scaffolded with a Makefile. You can customize and update this file as you please. Here you will find some helpers that can be useful.
The projects are built with Go and there are many ways to do that. One option is to use go-delve for it:
# Run with Delve for development purposes against the configured Kubernetes cluster in ~/.kube/config
# Delve is a debugger for the Go programming language. More info: https://github.com/go-delve/delve
run-delve: generate fmt vet manifests
go build -gcflags "all=-trimpath=$(shell go env GOPATH)" -o bin/manager main.go
dlv --listen=:2345 --headless=true --api-version=2 --accept-multiclient exec ./bin/manager
The controller-gen
program (from controller-tools )
generates CRDs for kubebuilder projects, wrapped in the following make
rule:
manifests: controller-gen
$(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
controller-gen
lets you specify what CRD API version to generate (either “v1”, the default, or “v1beta1”).
You can direct it to generate a specific version by adding crd:crdVersions={<version>}
to your CRD_OPTIONS
,
found at the top of your Makefile:
CRD_OPTIONS ?= "crd:crdVersions={v1beta1},preserveUnknownFields=false"
manifests: controller-gen
$(CONTROLLER_GEN) rbac:roleName=manager-role $(CRD_OPTIONS) webhook paths="./..." output:crd:artifacts:config=config/crd/bases
By adding a make dry-run target you can get the patched manifests in the dry-run folder, unlike make deploy, which runs kustomize and kubectl apply.
To accomplish this, add the following lines to the Makefile:
dry-run: manifests
cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
mkdir -p dry-run
$(KUSTOMIZE) build config/default > dry-run/manifests.yaml
The Project Config represents the configuration of a KubeBuilder project. All projects that are scaffolded with the CLI (KB version 3.0 and higher) will generate the PROJECT
file in the projects’ root directory. Therefore, it will store all plugins and input data used to generate the project and APIs to better enable plugins to make useful decisions when scaffolding.
Following is an example of a PROJECT config file which is the result of a project generated with two APIs using the Deploy Image Plugin .
# Code generated by tool. DO NOT EDIT.
# This file is used to track the info used to scaffold your project
# and allow the plugins to work properly.
# More info: https://book.kubebuilder.io/reference/project-config.html
domain: testproject.org
layout:
- go.kubebuilder.io/v4
plugins:
deploy-image.go.kubebuilder.io/v1-alpha:
resources:
- domain: testproject.org
group: example.com
kind: Memcached
options:
containerCommand: memcached,-m=64,-o,modern,-v
containerPort: "11211"
image: memcached:1.4.36-alpine
runAsUser: "1001"
version: v1alpha1
- domain: testproject.org
group: example.com
kind: Busybox
options:
image: busybox:1.28
version: v1alpha1
projectName: project-v4-with-deploy-image
repo: sigs.k8s.io/kubebuilder/testdata/project-v4-with-deploy-image
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: testproject.org
group: example.com
kind: Memcached
path: sigs.k8s.io/kubebuilder/testdata/project-v4-with-deploy-image/api/v1alpha1
version: v1alpha1
webhooks:
validation: true
webhookVersion: v1
- api:
crdVersion: v1
namespaced: true
controller: true
domain: testproject.org
group: example.com
kind: Busybox
path: sigs.k8s.io/kubebuilder/testdata/project-v4-with-deploy-image/api/v1alpha1
version: v1alpha1
version: "3"
The following are some examples of motivations for tracking the input used:
check whether a plugin can or cannot be scaffolded on top of an existing plugin (i.e. plugin compatibility when chaining multiple of them together).
determine what operations can or cannot be done, such as verifying whether the layout allows API(s) for different groups to be scaffolded for the current configuration.
verify what data can or cannot be used in the CLI operations, such as ensuring that webhooks can only be created for pre-existing API(s).
Note that KubeBuilder is not only a CLI tool but can also be used as a library to allow users to create their plugins/tools, provide helpers and customizations on top of their existing projects - an example of which is Operator-SDK . SDK leverages KubeBuilder to create plugins to allow users to work with other languages and provide helpers for their users to integrate their projects with, for example, the Operator Framework solutions/OLM . You can check the plugin’s documentation to know more about creating custom plugins.
Additionally, another motivation for the PROJECT file is to help create a feature that allows users to easily upgrade their projects by providing helpers that automatically re-scaffold the project, since all the required metadata regarding the APIs, their configurations and versions is kept in the PROJECT file. For example, it can be used to automate the process of re-scaffolding while migrating between plugin versions. (More info).
The Project config is versioned according to its layout. For further information see Versioning .
The PROJECT
version 3
layout looks like:
domain: testproject.org
layout:
- go.kubebuilder.io/v3
plugins:
declarative.go.kubebuilder.io/v1:
resources:
- domain: testproject.org
group: crew
kind: FirstMate
version: v1
projectName: example
repo: sigs.k8s.io/kubebuilder/example
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: testproject.org
group: crew
kind: Captain
path: sigs.k8s.io/kubebuilder/example/api/v1
version: v1
webhooks:
defaulting: true
validation: true
webhookVersion: v1
Now let’s check its layout fields definition:
Field Description
layout
Defines the global plugins, e.g. a project init
with --plugins="go/v3,declarative"
means that any sub-command used will always call its implementation for both plugins in a chain.
domain
Stores the domain of the project. This information can be provided by the user when the project is generated with the init sub-command and the domain flag.
plugins
Defines the plugins used to do custom scaffolding, e.g. to use the optional declarative
plugin to do scaffolding for just a specific api via the command kubebuilder create api [options] --plugins=declarative/v1
.
projectName
The name of the project. This will be used to scaffold the manager data. By default it is the name of the project directory, however, it can be provided by the user in the init
sub-command via the --project-name
flag.
repo
The project repository which is the Golang module, e.g github.com/example/myproject-operator
.
resources
An array of all resources which were scaffolded in the project.
resources.api
The API scaffolded in the project via the sub-command create api
.
resources.api.crdVersion
The Kubernetes API version (apiVersion
) used to do the scaffolding for the CRD resource.
resources.api.namespaced
Indicates whether the API resource is namespace-scoped or cluster-scoped.
resources.controller
Indicates whether a controller was scaffolded for the API.
resources.domain
The domain of the resource which is provided by the --domain
flag when the sub-command create api
is used.
resources.group
The GVK group of the resource which is provided by the --group
flag when the sub-command create api
is used.
resources.version
The GVK version of the resource which is provided by the --version
flag when the sub-command create api
is used.
resources.kind
Stores the GVK Kind of the resource which is provided by the --kind
flag when the sub-command create api
is used.
resources.path
The import path for the API resource. It will be <repo>/api/<kind>
unless the API added to the project is an external or core-type. For the core-types scenarios, the paths used are mapped here .
resources.webhooks
Stores the webhooks data when the sub-command create webhook
is used.
resources.webhooks.webhookVersion
The Kubernetes API version (apiVersion
) used to scaffold the webhook resource.
resources.webhooks.conversion
It is true when the webhook was scaffolded with the --conversion flag, which means that it is a conversion webhook.
resources.webhooks.defaulting
It is true when the webhook was scaffolded with the --defaulting flag, which means that it is a defaulting webhook.
resources.webhooks.validation
It is true when the webhook was scaffolded with the --programmatic-validation flag, which means that it is a validation webhook.
Since Kubebuilder version 3.0.0, preliminary support for plugins has been available. You can Extend the CLI and Scaffolds as well. When users run the CLI commands to perform the scaffolds, the plugins are used:
To initialize a project with a chain of global plugins:
kubebuilder init --plugins=pluginA,pluginB
To perform an optional scaffold using custom plugins:
kubebuilder create api --plugins=pluginA,pluginB
This section details how to extend Kubebuilder and create your plugins following the same layout structures.
This section describes the plugins supported and shipped in with the Kubebuilder project.
The following plugins are useful to scaffold the whole project with the tool.
The following plugins are useful to generate code and take advantage of optional features
Then, note that you can use the kustomize plugin, which is responsible for scaffolding the kustomize files under config/, as well as the base language plugins, which are responsible for scaffolding the Golang files, to create your own plugins to work with other languages (e.g. as Operator-SDK does to allow users to work with Ansible/Helm) or to add helpers on top, such as Operator-SDK does to add its features to integrate the projects with OLM.
Plugin Key Description
kustomize.common.kubebuilder.io/v1 kustomize/v1 (Deprecated) Responsible for scaffolding all manifests to configure projects with kustomize(v3) . (create and update the config/
directory). This plugin is used in the composition to create the plugin (go/v3
).
kustomize.common.kubebuilder.io/v2 kustomize/v2
It has the same purpose as kustomize/v1
. However, it works with kustomize version v4
and addresses the required changes for future kustomize configurations. It will probably be used with the future go/v4-alpha
plugin.
base.go.kubebuilder.io/v3
base/v3
Responsible for scaffolding all files that specifically require Golang. This plugin is used in composition to create the plugin (go/v3
)
base.go.kubebuilder.io/v4
base/v4
Responsible for scaffolding all files which specifically require Golang. This plugin is used in the composition to create the plugin (go/v4
)
The following plugins are useful to scaffold the whole project with the tool.
The go/v2
plugin cannot scaffold projects in which CRDs and/or Webhooks have a v1
API version.
The go/v2
plugin scaffolds with the v1beta1
API version which was deprecated in Kubernetes 1.16
and removed in 1.22
.
This plugin was kept to ensure backwards compatibility with projects that were scaffolded with the old "Kubebuilder 2.x"
layout and does not work with the new plugin ecosystem that was introduced with Kubebuilder 3.0.0
More info
Since 28 Apr 2021
, the default layout produced by Kubebuilder changed and is done via the go/v3
.
We encourage you to migrate your project to the latest version if it was built with a Kubebuilder version < 3.0.0.
The recommended way to migrate a v2
project is to create a new v3
project and copy over the API
and the reconciliation code. The conversion will end up with a project that looks like a native v3
project.
For further information check the Migration guide
The purpose of the go/v2 plugin is to scaffold Golang projects that help users build projects with controllers while keeping backwards compatibility with the default scaffold produced by Kubebuilder CLI 2.x.z releases.
You can check samples using this plugin by looking at the project-v2-<options>
directories under the testdata projects on the root directory of the Kubebuilder project.
Use it only if you are looking to scaffold a project with the legacy layout. Otherwise, it is recommended that you use the default Golang version plugin.
Be aware that, in order to keep its backwards compatibility, this plugin version does not provide a scaffold compatible with the latest versions of the dependencies used.
To initialize a Golang project using the legacy layout and with this plugin run, e.g.:
kubebuilder init --domain tutorial.kubebuilder.io --repo tutorial.kubebuilder.io/project --plugins=go/v2
By creating a project with this plugin, the PROJECT
file scaffold will be using the previous
schema (project version 2 ), so that Kubebuilder CLI knows what plugin version was used and will
call its subcommands such as create api
and create webhook
.
Note that further Golang plugins versions use the new Project file schema, which tracks the
information about what plugins and versions have been used so far.
Init - kubebuilder init [OPTIONS]
Edit - kubebuilder edit [OPTIONS]
Create API - kubebuilder create api [OPTIONS]
Create Webhook - kubebuilder create webhook [OPTIONS]
The go/v3 plugin cannot fully support Kubernetes 1.25+ or work with Kustomize versions > v3.
The recommended way to migrate a v3
project is to create a new v4
project and copy over the API
and the reconciliation code. The conversion will end up with a project that looks like a native v4
project.
For further information check the Migration guide
Kubebuilder tool will scaffold the go/v3 plugin by default. This plugin is a composition of the plugins kustomize.common.kubebuilder.io/v1
and base.go.kubebuilder.io/v3
. By using it you can scaffold the default project, which is a helper to construct sets of controllers.
It basically scaffolds all the boilerplate code required to create and design controllers. Note that by following the quickstart you will be using this plugin.
Samples are provided under the testdata directory of the Kubebuilder project. You can check samples using this plugin by looking at the project-v3-<options>
projects under the testdata directory on the root directory of the Kubebuilder project.
If you are looking to scaffold Golang projects to develop projects using controllers
As go/v3 is the default plugin, there is no need to explicitly tell Kubebuilder to use this plugin.
To create a new project with the go/v3
plugin the following command can be used:
kubebuilder init --plugins=go/v3 --domain tutorial.kubebuilder.io --repo tutorial.kubebuilder.io/project
All the other subcommands supported by the go/v3 plugin can be executed similarly.
Also, if you need you can explicitly inform the plugin via the option provided --plugins=go/v3
.
Init - kubebuilder init [OPTIONS]
Edit - kubebuilder edit [OPTIONS]
Create API - kubebuilder create api [OPTIONS]
Create Webhook - kubebuilder create webhook [OPTIONS]
Kubebuilder will scaffold using the go/v4
plugin only if specified when initializing the project.
This plugin is a composition of the plugins kustomize.common.kubebuilder.io/v2
and base.go.kubebuilder.io/v4
.
It scaffolds a project template that helps in constructing sets of controllers .
It scaffolds boilerplate code to create and design controllers.
Note that by following the quickstart you will be using this plugin.
You can check samples using this plugin by looking at the project-v4-<options>
projects
under the testdata directory on the root directory of the Kubebuilder project.
If you are looking to scaffold Golang projects to develop projects using controllers
If you have a project created with go/v3 (the default layout since 28 Apr 2021 and Kubebuilder release version 3.0.0) and want to move it to go/v4, see the migration guide Migration from go/v3 to go/v4
To create a new project with the go/v4
plugin the following command can be used:
kubebuilder init --domain tutorial.kubebuilder.io --repo tutorial.kubebuilder.io/project --plugins=go/v4
Init - kubebuilder init [OPTIONS]
Edit - kubebuilder edit [OPTIONS]
Create API - kubebuilder create api [OPTIONS]
Create Webhook - kubebuilder create webhook [OPTIONS]
The following plugins are useful to generate code and take advantage of optional features
Plugin Key Description
declarative.go.kubebuilder.io/v1 - (Deprecated) declarative/v1
Optional plugin used to scaffold APIs/controllers using the [kubebuilder-declarative-pattern][kubebuilder-declarative-pattern] project.
grafana.kubebuilder.io/v1-alpha grafana/v1-alpha
Optional helper plugin which can be used to scaffold Grafana Manifests Dashboards for the default metrics which are exported by controller-runtime.
deploy-image.go.kubebuilder.io/v1-alpha deploy-image/v1-alpha
Optional helper plugin which can be used to scaffold APIs and controller with code implementation to Deploy and Manage an Operand(image).
The Declarative plugin is an implementation derived from the kubebuilder-declarative-pattern project.
As the project maintainers possess the most comprehensive knowledge about its changes and Kubebuilder allows
the creation of custom plugins using its library, it has been decided that this plugin will be better
maintained within the kubebuilder-declarative-pattern project itself,
which falls under its domain of responsibility. This decision aims to improve the maintainability of both the
plugin and Kubebuilder, ultimately providing an enhanced user experience. To follow up on this work, please refer
to Issue #293 in the
kubebuilder-declarative-pattern repository.
The declarative plugin allows you to create controllers using the kubebuilder-declarative-pattern .
By using the declarative plugin, you can make the required changes on top of what is scaffolded by default when you create a Go project with Kubebuilder and the Golang plugins (i.e. go/v2, go/v3).
You can check samples using this plugin by looking at the “addon” samples inside the testdata directory of the Kubebuilder project.
If you are looking to scaffold one or more controllers following the pattern ( See an e.g. of the reconcile method implemented here )
If you want to have manifests shipped inside your Manager container. The declarative plugin works with channels, which allow you to push manifests. More info
The declarative plugin must be used together with one of the available Golang plugins.
If you want every API and its respective controller scaffolded in your project to adopt this pattern, then:
kubebuilder init --plugins=go/v3,declarative/v1 --domain example.org --repo example.org/guestbook-operator
If you want to adopt this pattern only for specific API(s) and their respective controller(s) (not for every API/controller scaffolded using the Kubebuilder CLI), then:
kubebuilder create api --plugins=go/v3,declarative/v1 --version v1 --kind Guestbook
The declarative plugin implements the following subcommands:
init ($ kubebuilder init [OPTIONS]
)
create api ($ kubebuilder create api [OPTIONS]
)
The following scaffolds will be created or updated by this plugin:
controllers/*_controller.go
api/*_types.go
channels/packages/<packagename>/<version>/manifest.yaml
channels/stable
Dockerfile
The Grafana plugin is an optional plugin that can be used to scaffold Grafana Dashboards to allow you to check out the default metrics which are exported by projects using controller-runtime .
You can check its default scaffold by looking at the project-v3-with-metrics
projects under the testdata directory on the root directory of the Kubebuilder project.
Your project must be using controller-runtime to expose the metrics via the controller default metrics and they need to be collected by Prometheus.
Access to Prometheus .
Prometheus should have an endpoint exposed. (For prometheus-operator
, this is similar as: http://prometheus-k8s.monitoring.svc:9090 )
The endpoint is ready to become (or already is) the data source of your Grafana. See Add a data source
Access to Grafana . Make sure you have:
Check the metrics documentation to learn how to enable the metrics for projects scaffolded with Kubebuilder.
Note that in config/prometheus you will find the ServiceMonitor that enables the metrics on the default /metrics endpoint.
The Grafana plugin is attached to the init
subcommand and the edit
subcommand:
# Initialize a new project with grafana plugin
kubebuilder init --plugins grafana.kubebuilder.io/v1-alpha
# Enable grafana plugin to an existing project
kubebuilder edit --plugins grafana.kubebuilder.io/v1-alpha
The plugin will create a new directory and scaffold the JSON files under it (i.e. grafana/controller-runtime-metrics.json
).
See an example of how to use the plugin in your project:
Copy the JSON file
Visit <your-grafana-url>/dashboard/import
to import a new dashboard .
Paste the JSON content into Import via panel json, then press the Load button
Select the data source for Prometheus metrics
Once the json is imported in Grafana, the dashboard is ready.
Metrics:
controller_runtime_reconcile_total
controller_runtime_reconcile_errors_total
Query:
sum(rate(controller_runtime_reconcile_total{job="$job"}[5m])) by (instance, pod)
sum(rate(controller_runtime_reconcile_errors_total{job="$job"}[5m])) by (instance, pod)
Description:
Per-second rate of total reconciliation as measured over the last 5 minutes
Per-second rate of reconciliation errors as measured over the last 5 minutes
Sample:
Metrics:
process_cpu_seconds_total
process_resident_memory_bytes
Query:
rate(process_cpu_seconds_total{job="$job", namespace="$namespace", pod="$pod"}[5m]) * 100
process_resident_memory_bytes{job="$job", namespace="$namespace", pod="$pod"}
Description:
Per-second rate of CPU usage as measured over the last 5 minutes
Allocated Memory for the running controller
Sample:
Metrics
workqueue_queue_duration_seconds_bucket
Query:
histogram_quantile(0.50, sum(rate(workqueue_queue_duration_seconds_bucket{job="$job", namespace="$namespace"}[5m])) by (instance, name, le))
Description
Seconds an item stays in workqueue before being requested.
Sample:
Metrics
workqueue_work_duration_seconds_bucket
Query:
histogram_quantile(0.50, sum(rate(workqueue_work_duration_seconds_bucket{job="$job", namespace="$namespace"}[5m])) by (instance, name, le))
Description
Seconds of processing an item from workqueue takes.
Sample:
Metrics
workqueue_adds_total
Query:
sum(rate(workqueue_adds_total{job="$job", namespace="$namespace"}[5m])) by (instance, name)
Description
Per-second rate of items added to work queue
Sample:
Metrics
workqueue_retries_total
Query:
sum(rate(workqueue_retries_total{job="$job", namespace="$namespace"}[5m])) by (instance, name)
Description
Per-second rate of retries handled by workqueue
Sample:
Metrics
controller_runtime_active_workers
Query:
controller_runtime_active_workers{job="$job", namespace="$namespace"}
Description
The number of active controller workers
Sample:
Metrics
workqueue_depth
Query:
workqueue_depth{job="$job", namespace="$namespace"}
Description
Current depth of workqueue
Sample:
Metrics
workqueue_unfinished_work_seconds
Query:
rate(workqueue_unfinished_work_seconds{job="$job", namespace="$namespace"}[5m])
Description
How many seconds of work has been done that is in progress and hasn’t been observed by work_duration.
Sample:
The Grafana plugin supports scaffolding manifests for custom metrics.
When the plugin is triggered for the first time, grafana/custom-metrics/config.yaml
is generated.
---
customMetrics:
# - metric: # Raw custom metric (required)
# type: # Metric type: counter/gauge/histogram (required)
# expr: # Prom_ql for the metric (optional)
# unit: # Unit of measurement, examples: s,none,bytes,percent,etc. (optional)
You can enter multiple custom metrics in the file. For each element, you need to specify the metric
and its type
.
The Grafana plugin can automatically generate expr
for visualization.
Alternatively, you can provide expr
and the plugin will use the specified one directly.
---
customMetrics:
- metric: memcached_operator_reconcile_total # Raw custom metric (required)
type: counter # Metric type: counter/gauge/histogram (required)
unit: none
- metric: memcached_operator_reconcile_time_seconds_bucket
type: histogram
Once config.yaml
is configured, you can run kubebuilder edit --plugins grafana.kubebuilder.io/v1-alpha
again.
This time, the plugin will generate grafana/custom-metrics/custom-metrics-dashboard.json
, which can be imported to Grafana UI.
See an example of how to visualize your custom metrics:
The Grafana plugin implements the following subcommands:
The following scaffolds will be created or updated by this plugin:
The deploy-image plugin allows users to create controllers and custom resources which will deploy and manage an image on the cluster following
the guidelines and best practices. It abstracts the complexities to achieve this goal while allowing users to improve and customize their projects.
By using this plugin you will have:
a controller implementation to Deploy and manage an Operand(image) on the cluster
tests to check the reconciliation implemented using ENVTEST
the custom resources samples updated with the specs used
the Operand (image) configured on the manager via an environment variable
See the “project-v3-with-deploy-image” directory under the testdata directory of the Kubebuilder project to check an example of a scaffolding created using this plugin.
This plugin is helpful for those who are getting started.
If you are looking to Deploy and Manage an image (Operand) using the Operator pattern, the plugin will create an API/controller to be reconciled to achieve this goal
If you are looking to speed up
After you create a new project with kubebuilder init
you can create APIs using this plugin. Ensure that you have followed the quick start before trying to use it.
Then, by using this plugin you can create APIs informing the image (Operand) that you would like to deploy on the cluster. Note that you can optionally specify the command that could be used to initialize this container via the flag --image-container-command
and the port with --image-container-port
flag. You can also specify the RunAsUser
value for the Security Context of the container via the flag --run-as-user
, i.e.:
kubebuilder create api --group example.com --version v1alpha1 --kind Memcached --image=memcached:1.6.15-alpine --image-container-command="memcached,-m=64,modern,-v" --image-container-port="11211" --run-as-user="1001" --plugins="deploy-image/v1-alpha"
The make run target will execute the main.go outside of the cluster so you can test the project by running it locally. Note that by using this plugin the Operand image informed will be stored via an environment variable in the config/manager/manager.yaml manifest.
Therefore, before running make run, you need to export any environment variables that you might have. Example:
export MEMCACHED_IMAGE="memcached:1.4.36-alpine"
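For reference, the scaffolded controller resolves the Operand image from that environment variable with a helper roughly like the sketch below. This is a simplified sketch, not the exact generated code: the function and variable names come from the Memcached sample, so check your own scaffolded controller for the real names.
package controllers

import (
    "fmt"
    "os"
)

// imageForOperand returns the Operand image configured via an environment
// variable. The variable name is derived from the Kind (MEMCACHED_IMAGE for
// the Memcached sample) and is also set in config/manager/manager.yaml.
func imageForOperand() (string, error) {
    image, found := os.LookupEnv("MEMCACHED_IMAGE")
    if !found {
        return "", fmt.Errorf("unable to find the MEMCACHED_IMAGE environment variable with the image")
    }
    return image, nil
}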
The deploy-image plugin implements the following subcommands:
create api ($ kubebuilder create api [OPTIONS])
With the create api command of this plugin, in addition to the existing scaffolding, the following files are affected:
controllers/*_controller.go (scaffold controller with reconciliation implemented)
controllers/*_controller_test.go (scaffold the tests for the controller)
controllers/*_suite_test.go (scaffold/update the suite of tests)
api/<version>/*_types.go (scaffold the specs for the new api)
config/samples/*_.yaml (scaffold default values for its CR)
main.go (update to add controller setup)
config/manager/manager.yaml (update with envvar to store the image)
Then, see that you can use the kustomize plugin, which is responsible for scaffolding the kustomize files under config/, as well as
the base language plugins, which are responsible for scaffolding the Golang files, to create your own plugins to work with
other languages (e.g. as Operator-SDK does to allow users to work with Ansible/Helm) or to add
helpers on top, such as Operator-SDK does to add its features to integrate the projects with OLM .
Plugin: kustomize.common.kubebuilder.io/v1 (key kustomize/v1, deprecated). Responsible for scaffolding all manifests to configure projects with kustomize (v3) (creates and updates the config/ directory). This plugin is used in the composition to create the go/v3 plugin.
Plugin: kustomize.common.kubebuilder.io/v2 (key kustomize/v2). It has the same purpose as kustomize/v1. However, it works with kustomize version v4 and addresses the required changes for future kustomize configurations. It will probably be used with the future go/v4-alpha plugin.
Plugin: base.go.kubebuilder.io/v3 (key base/v3). Responsible for scaffolding all files that specifically require Golang. This plugin is used in the composition to create the go/v3 plugin.
Plugin: base.go.kubebuilder.io/v4 (key base/v4). Responsible for scaffolding all files that specifically require Golang. This plugin is used in the composition to create the go/v4 plugin.
The kustomize/v1 plugin is deprecated. If you are using this plugin, it is recommended
to migrate to the kustomize/v2 plugin which uses Kustomize v5 and provides support for
Apple Silicon (M1).
If you are using Golang projects scaffolded with go/v3, which uses this version, please check
the Migration guide to learn how to upgrade your projects.
The kustomize plugin allows you to scaffold all kustomize manifests used to work with the language plugins such as go/v2 and go/v3.
By using the kustomize plugin, you can create your own language plugins and ensure that you will have the same configurations and features provided by it.
This plugin uses kubernetes-sigs/kustomize v3 and the supported architectures are:
linux/amd64
linux/arm64
darwin/amd64
You might want to consider using kustomize/v2 if you are looking to scaffold projects for other architectures (e.g. if you are looking to scaffold projects with Apple Silicon/M1 (darwin/arm64), this plugin will not work; more info: kubernetes-sigs/kustomize#4612 ).
Note that projects such as Operator-SDK consume the Kubebuilder project as a library and provide options to work with other languages
like Ansible and Helm. The kustomize plugin allows them to easily keep a maintained configuration and ensure that all languages have
the same configuration. It is also helpful if you are looking to provide plugins which perform changes on top of
what is scaffolded by default. With this approach we do not need to keep manually updating this configuration in every language plugin
that uses it, and we are also able to create “helper” plugins which can work with many projects and languages.
You can check the kustomize content by looking at the config/
directory. Samples are provided under the testdata
directory of the Kubebuilder project.
If you are looking to scaffold the kustomize configuration manifests for your own language plugin
If you are looking to define that your language plugin should use kustomize, use the Bundle Plugin
to specify that your language plugin is a composition, with your plugin responsible for scaffolding
all that is language-specific and kustomize for its configuration; see:
// Bundle plugin which builds the Golang project scaffold provided by Kubebuilder go/v3
// The following code creates a new plugin with its name and version via composition
// You can define that one plugin is a composition of one or many other plugins
gov3Bundle, _ := plugin.NewBundle(plugin.WithName(golang.DefaultNameQualifier),
plugin.WithVersion(plugin.Version{Number: 3}),
plugin.WithPlugins(kustomizecommonv1.Plugin{}, golangv3.Plugin{}), // scaffold the config/ directory and all kustomize files
// Scaffold the Golang files and all that specific for the language e.g. go.mod, apis, controllers
)
Also, with Kubebuilder, you can use kustomize alone via:
kubebuilder init --plugins=kustomize/v1
$ ls -la
total 24
drwxr-xr-x 6 camilamacedo86 staff 192 31 Mar 09:56 .
drwxr-xr-x 11 camilamacedo86 staff 352 29 Mar 21:23 ..
-rw------- 1 camilamacedo86 staff 129 26 Mar 12:01 .dockerignore
-rw------- 1 camilamacedo86 staff 367 26 Mar 12:01 .gitignore
-rw------- 1 camilamacedo86 staff 94 31 Mar 09:56 PROJECT
drwx------ 6 camilamacedo86 staff 192 31 Mar 09:56 config
Or combined with the base language plugins:
# Provides the same scaffold of go/v3 plugin which is a composition (kubebuilder init --plugins=go/v3)
kubebuilder init --plugins=kustomize/v1,base.go.kubebuilder.io/v3 --domain example.org --repo example.org/guestbook-operator
The kustomize plugin implements the following subcommands:
init ($ kubebuilder init [OPTIONS])
create api ($ kubebuilder create api [OPTIONS])
create webhook ($ kubebuilder create webhook [OPTIONS])
Its implementation of the create api subcommand will scaffold the kustomize manifests
which are specific to each API; see here . The same applies
to its implementation of create webhook.
The following scaffolds will be created or updated by this plugin:
The kustomize plugin allows you to scaffold all kustomize manifests used to work with the language base plugin base.go.kubebuilder.io/v4.
This plugin is used to generate the manifests under the config/ directory for projects built with the go/v4 plugin (default scaffold).
Note that projects such as Operator-SDK consume the Kubebuilder project as a library and provide options to work with other languages
like Ansible and Helm. The kustomize plugin allows them to easily keep a maintained configuration and ensure that all languages have
the same configuration. It is also helpful if you are looking to provide plugins which perform changes on top of
what is scaffolded by default. With this approach we do not need to keep manually updating this configuration in every language plugin
that uses it, and we are also able to create “helper” plugins which can work with many projects and languages.
You can check the kustomize content by looking at the config/ directory provided in the sample project-v4-* under the testdata directory of the Kubebuilder project.
If you are looking to scaffold the kustomize configuration manifests for your own language plugin
If you are looking for support on Apple Silicon (darwin/arm64). (Before kustomize 4.x the binary for this platform was not provided.)
If you are looking to try out the new syntax and features provided by kustomize v4 (More info) and v5 (More info)
If you are NOT looking to build projects which will be used on Kubernetes cluster versions < 1.22
(the new features provided by kustomize v4 are not officially supported and might not work with kubectl < 1.22)
If you are NOT looking to rely on special URLs in resource fields
If you want to use replacements since vars are deprecated and might be removed soon
If you are looking to define that your language plugin should use kustomize, use the Bundle Plugin
to specify that your language plugin is a composition, with your plugin responsible for scaffolding
all that is language-specific and kustomize for its configuration; see:
import (
...
kustomizecommonv2 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/common/kustomize/v2"
golangv4 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/golang/v4"
...
)
// Bundle plugin which builds the Golang project scaffold provided by Kubebuilder go/v4
// The following code creates a new plugin with its name and version via composition
// You can define that one plugin is a composition of one or many other plugins
gov4Bundle, _ := plugin.NewBundle(plugin.WithName(golang.DefaultNameQualifier),
plugin.WithVersion(plugin.Version{Number: 4}),
plugin.WithPlugins(kustomizecommonv2.Plugin{}, golangv4.Plugin{}), // scaffold the config/ directory and all kustomize files
// Scaffold the Golang files and all that is specific to the language e.g. go.mod, apis, controllers
)
Also, with Kubebuilder, you can use kustomize/v2 alone via:
kubebuilder init --plugins=kustomize/v2
$ ls -la
total 24
drwxr-xr-x 6 camilamacedo86 staff 192 31 Mar 09:56 .
drwxr-xr-x 11 camilamacedo86 staff 352 29 Mar 21:23 ..
-rw------- 1 camilamacedo86 staff 129 26 Mar 12:01 .dockerignore
-rw------- 1 camilamacedo86 staff 367 26 Mar 12:01 .gitignore
-rw------- 1 camilamacedo86 staff 94 31 Mar 09:56 PROJECT
drwx------ 6 camilamacedo86 staff 192 31 Mar 09:56 config
Or combined with the base language plugins:
# Provides the same scaffold as the go/v4 plugin, which is a composition of kustomize/v2 and base.go.kubebuilder.io/v4 (kubebuilder init --plugins=go/v4)
kubebuilder init --plugins=kustomize/v2,base.go.kubebuilder.io/v4 --domain example.org --repo example.org/guestbook-operator
The kustomize plugin implements the following subcommands:
init ($ kubebuilder init [OPTIONS])
create api ($ kubebuilder create api [OPTIONS])
create webhook ($ kubebuilder create webhook [OPTIONS])
Its implementation of the create api subcommand will scaffold the kustomize manifests
which are specific to each API; see here . The same applies
to its implementation of create webhook.
The following scaffolds will be created or updated by this plugin:
You can extend Kubebuilder to allow your project to have the same CLI features and provide the plugin scaffolds.
Plugins are run using a CLI object, which maps a plugin type to a subcommand and calls that plugin’s methods.
For example, writing a program that injects an Init plugin into a CLI and then calling CLI.Run() will call the
plugin’s SubcommandMetadata, UpdatesMetadata and Run methods with the information a user has passed to the
program in kubebuilder init. The following is an example:
package cli
import (
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"sigs.k8s.io/kubebuilder/v3/pkg/cli"
cfgv3 "sigs.k8s.io/kubebuilder/v3/pkg/config/v3"
"sigs.k8s.io/kubebuilder/v3/pkg/plugin"
kustomizecommonv1 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/common/kustomize/v1"
"sigs.k8s.io/kubebuilder/v3/pkg/plugins/golang"
declarativev1 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/golang/declarative/v1"
golangv3 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/golang/v3"
)
var (
// The following is an example of the commands
// that you might have in your own binary
commands = []*cobra.Command{
myExampleCommand.NewCmd(),
}
alphaCommands = []*cobra.Command{
myExampleAlphaCommand.NewCmd(),
}
)
// GetPluginsCLI returns the plugins based CLI configured to be used in your CLI binary
func GetPluginsCLI() (*cli.CLI) {
// Bundle plugin which builds the Golang project scaffold provided by Kubebuilder go/v3
gov3Bundle, _ := plugin.NewBundleWithOptions(plugin.WithName(golang.DefaultNameQualifier),
plugin.WithVersion(plugin.Version{Number: 3}),
plugin.WithPlugins(kustomizecommonv1.Plugin{}, golangv3.Plugin{}),
)
c, err := cli.New(
// Add the name of your CLI binary
cli.WithCommandName("example-cli"),
// Add the version of your CLI binary
cli.WithVersion(versionString()),
// Register the plugins which can be used to do the scaffolds via your CLI tool. As an example, we are using plugins implemented and provided by Kubebuilder
cli.WithPlugins(
gov3Bundle,
&declarativev1.Plugin{},
),
// Defines the default plugin used by your binary, i.e. the plugin used when none is specified, such as when the user runs `kubebuilder init`
cli.WithDefaultPlugins(cfgv3.Version, gov3Bundle),
// Define the default project configuration version which will be used by the CLI when none is provided via the --project-version flag.
cli.WithDefaultProjectVersion(cfgv3.Version),
// Adds your own commands to the CLI
cli.WithExtraCommands(commands...),
// Add your own alpha commands to the CLI
cli.WithExtraAlphaCommands(alphaCommands...),
// Adds the completion option for your CLI
cli.WithCompletion(),
)
if err != nil {
log.Fatal(err)
}
return c
}
// versionString returns the CLI version
func versionString() string {
// return your binary project version; "v0.1.0" is just a placeholder
return "v0.1.0"
}
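As a quick illustration, a minimal main.go that wires this function into a binary could look like the sketch below; the module path example.com/example-cli is hypothetical and stands for wherever your cli package lives.
package main

import (
    log "github.com/sirupsen/logrus"

    // Hypothetical import path for the cli package that defines GetPluginsCLI.
    "example.com/example-cli/pkg/cli"
)

func main() {
    // Build the plugins-based CLI and execute the subcommand the user invoked.
    c := cli.GetPluginsCLI()
    if err := c.Run(); err != nil {
        log.Fatal(err)
    }
}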
This program can then be built and run in the following ways:
Default behavior:
# Initialize a project with the default Init plugin, "go.example.com/v1".
# This key is automatically written to a PROJECT config file.
$ my-bin-builder init
# Create an API and webhook with "go.example.com/v1" CreateAPI and
# CreateWebhook plugin methods. This key was read from the config file.
$ my-bin-builder create api [flags]
$ my-bin-builder create webhook [flags]
Selecting a plugin using --plugins
:
# Initialize a project with the "ansible.example.com/v1" Init plugin.
# Like above, this key is written to a config file.
$ my-bin-builder init --plugins ansible
# Create an API and webhook with "ansible.example.com/v1" CreateAPI
# and CreateWebhook plugin methods. This key was read from the config file.
$ my-bin-builder create api [flags]
$ my-bin-builder create webhook [flags]
The CLI is responsible for managing the PROJECT file config , representing the configuration of the projects that are scaffolded by the CLI tool.
Kubebuilder provides scaffolding options via plugins. Plugins are responsible for implementing the code that will be executed when the sub-commands are called. You can create a new plugin by implementing the Plugin interface .
On top of being a Base, a plugin should also implement the SubcommandMetadata interface so it can be run with a CLI. It optionally sets custom help text for the target command; this method can be a no-op, which will preserve the default help text set by the cobra command constructors.
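For illustration, a minimal sketch of such a metadata hook for a hypothetical init subcommand follows; the initSubcommand type and the help text are assumptions, and the exact interface and struct names should be checked against the plugin package version you depend on.
package myplugin

import (
    "fmt"

    "sigs.k8s.io/kubebuilder/v3/pkg/plugin"
)

// initSubcommand is a hypothetical subcommand type used only for this sketch.
type initSubcommand struct{}

// UpdateMetadata sets custom help text shown for `<cli> init --plugins ... --help`.
// Leaving this method empty keeps the default help text set by the cobra command constructors.
func (p *initSubcommand) UpdateMetadata(cliMeta plugin.CLIMetadata, subcmdMeta *plugin.SubcommandMetadata) {
    subcmdMeta.Description = "Initialize a project with the myplugin custom layout"
    subcmdMeta.Examples = fmt.Sprintf("  %s init --plugins myplugin/v1", cliMeta.CommandName)
}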
Kubebuilder CLI plugins wrap scaffolding and CLI features in conveniently packaged Go types that are executed by the
kubebuilder
binary, or any binary which imports them. More specifically, a plugin configures the execution of one
of the following CLI commands:
init: project initialization.
create api: scaffold Kubernetes API definitions.
create webhook: scaffold Kubernetes webhooks.
Plugins are identified by a key of the form <name>/<version>. There are two ways to specify a plugin to run:
Setting kubebuilder init --plugins=<plugin key>, which will initialize a project configured for the plugin with key <plugin key>.
A layout: <plugin key> in the scaffolded PROJECT configuration file . Commands (except for init, which scaffolds this file) will look at this value before running to choose which plugin to run.
By default, <plugin key> will be go.kubebuilder.io/vX, where X is some integer.
For a full implementation example, check out Kubebuilder’s native go.kubebuilder.io
plugin.
Plugin names must be DNS1123 labels and should be fully qualified, i.e. they have a suffix like
.example.com. For example, the base Go scaffold used with kubebuilder commands has the name go.kubebuilder.io.
Qualified names prevent conflicts between plugin names; both go.kubebuilder.io and go.example.com can scaffold
Go code and can be specified by a user.
A plugin’s Version()
method returns a plugin.Version
object containing an integer value
and optionally a stage string of either “alpha” or “beta”. The integer denotes the current version of a plugin.
Two different integer values between versions of plugins indicate that the two plugins are incompatible. The stage
string denotes plugin stability:
alpha: should be used for plugins that are frequently changed and may break between uses.
beta: should be used for plugins that are only changed in minor ways, e.g. bug fixes.
Any change that will break a project scaffolded by the previous plugin version is a breaking change.
Once a plugin is deprecated, have it implement a Deprecated interface so a deprecation warning will be printed when it is used.
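As a rough sketch, a plugin type might declare its version and a deprecation warning as below; the Plugin type and the messages are hypothetical, and a real plugin must also implement the rest of the plugin.Plugin interface (name, supported project versions, and so on).
package myplugin

import (
    "sigs.k8s.io/kubebuilder/v3/pkg/plugin"
)

// Plugin is a hypothetical plugin type used only for this sketch.
type Plugin struct{}

// Version returns the plugin version; the Stage field can additionally be set
// to mark the plugin as alpha or beta.
func (Plugin) Version() plugin.Version {
    return plugin.Version{Number: 2}
}

// DeprecationWarning makes the plugin satisfy the Deprecated interface, so the
// CLI prints this message whenever the plugin is used.
func (Plugin) DeprecationWarning() string {
    return "myplugin.example.com/v2 is deprecated; migrate to myplugin.example.com/v3"
}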
Bundle Plugins allow you to create a plugin that is a composition of many plugins:
// The key of this bundle will look like <plugin-name>/<plugin-version>, e.g. myplugin.example/v1
myPluginBundle, _ := plugin.NewBundle(plugin.WithName("<plugin-name>"),
plugin.WithVersion(plugin.Version{Number: 1}), // i.e. <plugin-version>
plugin.WithPlugins(pluginA.Plugin{}, pluginB.Plugin{}, pluginC.Plugin{}),
)
Note that this means that when a user of your CLI calls this plugin, the sub-commands will be executed in the order in which the plugins were added to the chain: the sub-command of plugin A ➔ the sub-command of plugin B ➔ the sub-command of plugin C.
Then, initializing with this “Plugin Bundle” runs the chain of plugins:
kubebuilder init --plugins=myplugin.example/v1
Runs the init sub-command of plugin A
Then runs the init sub-command of plugin B
Then runs the init sub-command of plugin C
Extending Kubebuilder can be accomplished in two primary ways:
By re-using the existing plugins
: In this approach, you use Kubebuilder as a library.
This enables you to import existing Kubebuilder plugins and extend them, leveraging their features to build upon.
It is particularly useful if you want to add functionalities that are closely tied with the existing Kubebuilder features.
By Creating an External Plugin
: This method allows you to create an independent, standalone plugin as a binary.
The plugin can be written in any language and should implement an execution pattern that Kubebuilder knows how to interact with.
You can see Creating external plugins for more info.
You can extend the Kubebuilder API to create your own plugins. If extending the CLI , your plugin will be implemented in your project and registered to the CLI as has been done by the SDK project. See its CLI code as an example.
If you are looking to create plugins which support and work with another language.
If you would like to create helpers and integrations on top of the scaffolds done by the plugins provided by Kubebuilder.
If you would like to have customized layouts according to your needs.
Kubebuilder provides a set of plugins to scaffold the projects, to help you extend and re-use its implementation to provide additional features.
For further information see Available Plugins .
Therefore, if you have a need you might want to propose a solution by adding a new plugin
which would be shipped with Kubebuilder by default.
However, you might also want to have your own tool to address your specific scenarios and by taking advantage of what is provided by Kubebuilder as a library.
That way, you can focus on addressing your needs and keep your solutions easier to maintain.
Note that by using Kubebuilder as a library, you can import its plugins and then create your own plugins that do customizations on top.
For instance, Operator-SDK does this with its manifests and scorecard plugins to add its features.
Also see here .
Another option, implemented with the Extensible CLI and Scaffolding Plugins - Phase 2 , is
to extend Kubebuilder as a library to create only a specific plugin that can be called and used with
Kubebuilder as well.
Kubebuilder offers the Golang-based operator plugins, which will help its CLI tool users create projects following the Operator Pattern .
The SDK project, for example, has language plugins for Ansible and Helm , which are similar options but for users who would like to work with these respective languages and stacks instead of Golang.
Note that Kubebuilder provides the kustomize.common.kubebuilder.io
to help in these efforts. This plugin will scaffold the common base without any specific language scaffold file to allow you to extend the Kubebuilder style for your plugins.
In this way, currently, you can Extend the CLI and use the Bundle Plugin
to create your language plugins such as:
mylanguagev1Bundle, _ := plugin.NewBundle(plugin.WithName(language.DefaultNameQualifier),
plugin.WithVersion(plugin.Version{Number: 1}),
plugin.WithPlugins(kustomizecommonv1.Plugin{}, mylanguagev1.Plugin{}), // extend the common base from Kubebuilder
// your plugin language which will do the scaffolds for the specific language on top of the common base
)
If you do not want to develop your plugin using Golang, you can follow its standard by using the binary as follows:
kubebuilder init --plugins=kustomize
Then you can, for example, create your implementations for the sub-commands create api
and create webhook
using your language of preference.
Kubebuilder and SDK are both broadly adopted projects which leverage the controller-runtime project. They both allow users to build solutions using the Operator Pattern and follow common standards.
Adopting these standards can bring significant benefits, such as joining forces on maintaining the common features provided by Kubebuilder and taking advantage of the contributions made by the community. This allows you to focus on the specific needs and requirements for your plugin and use-case.
You will also be able to use custom plugins and options, currently or in the future, which might be provided by these projects or by any other project that decides to pursue the same standards.
Note that users are also able to use plugins to customize their scaffolds and address specific needs.
See that Kubebuilder provides the deploy-image
plugin that allows the user to create the controller & CRs which will deploy and manage an image on the cluster:
kubebuilder create api --group example.com --version v1alpha1 --kind Memcached --image=memcached:1.6.15-alpine --image-container-command="memcached,-m=64,modern,-v" --image-container-port="11211" --run-as-user="1001" --plugins="deploy-image/v1-alpha"
This plugin will perform a custom scaffold following the Operator Pattern .
Another example is the grafana plugin that scaffolds a new folder containing manifests to visualize operator status on the Grafana Web UI:
kubebuilder edit --plugins="grafana.kubebuilder.io/v1-alpha"
In this way, by Extending the Kubebuilder CLI , you can also create custom plugins such as this one.
Feel free to check the implementation under:
Your plugin may add code on top of what is scaffolded by default with Kubebuilder sub-commands (init, create, …).
This is common as you may expect your plugin to:
Create API
Update controller manager logic
Generate corresponding manifests
The Kubebuilder internal plugins use boilerplates to generate the code files.
For instance, go/v3 scaffolds the main.go file by defining an object that implements the machinery interface .
In the implementation of Template.SetTemplateDefaults, the raw template is set to the body.
Such an object that implements the machinery interface is later passed to the execution of scaffold .
Similarly, you may design your own plugin implementation using these as a reference.
You can also view the other parts of the code files via the links above.
If your plugin is expected to modify part of the existing files with its scaffold, you may use functions provided by sigs.k8s.io/kubebuilder/v3/pkg/plugin/util .
See the example of deploy-image .
In brief, the util package helps you customize your scaffold at a lower level.
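A small sketch of that lower-level customization follows; the file paths, target strings, and inserted snippets are made up for illustration, and the InsertCode/ReplaceInFile signatures should be verified against the plugin/util version you import.
package myplugin

import (
    "sigs.k8s.io/kubebuilder/v3/pkg/plugin/util"
)

// customizeScaffold tweaks files that were already scaffolded by previous plugins.
func customizeScaffold() error {
    // Insert an env var right after a known anchor string in the manager manifest.
    if err := util.InsertCode(
        "config/manager/manager.yaml",
        "name: manager",
        "\n        env:\n        - name: MY_OPERAND_IMAGE\n          value: example:latest",
    ); err != nil {
        return err
    }
    // Replace a default value in the scaffolded Makefile.
    return util.ReplaceInFile("Makefile", "controller:latest", "example.com/controller:latest")
}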
Notice that Kubebuilder also provides machinery pkg where you can:
Define file I/O behavior.
Add markers to the scaffolded file.
Define the template for scaffolding.
For example, you might want to overwrite a scaffold by using the option:
f.IfExistsAction = machinery.OverwriteFile
Let’s imagine that you would like to have a helper plugin that is called in a chain with go/v4 to add customizations on top.
Therefore, after we generate the code by calling the init subcommand of go/v4, we would like to overwrite the Makefile to change this scaffold via our plugin.
In this way, we would implement the boilerplate for our Makefile and then use this option to ensure that the file gets overwritten (a minimal sketch follows).
See the example of deploy-image .
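A minimal sketch of such a Makefile template follows, assuming a helper plugin whose scaffold runs after go/v4 in the chain; the template body and package name are placeholders.
package templates

import (
    "sigs.k8s.io/kubebuilder/v3/pkg/machinery"
)

var _ machinery.Template = &Makefile{}

// Makefile overwrites the Makefile scaffolded by the plugin that ran before us.
type Makefile struct {
    machinery.TemplateMixin
}

// SetTemplateDefaults implements machinery.Template.
func (f *Makefile) SetTemplateDefaults() error {
    if f.Path == "" {
        f.Path = "Makefile"
    }
    f.TemplateBody = makefileTemplate
    // Ensure the existing Makefile is replaced instead of kept or erroring out.
    f.IfExistsAction = machinery.OverwriteFile
    return nil
}

const makefileTemplate = `# hypothetical Makefile content added by the helper plugin
.PHONY: hello
hello:
	@echo "scaffolded by my helper plugin"
`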
Since your plugin may frequently work with other plugins, the scaffolding command may become cumbersome, e.g:
kubebuilder create api --plugins=go/v3,kustomize/v1,yourplugin/v1
You can define a method in your scaffolder that calls the plugin scaffolding methods in order.
See the example of deploy-image .
Alternatively, you can create a plugin bundle to include the target plugins. For instance:
mylanguagev1Bundle, _ := plugin.NewBundle(plugin.WithName(language.DefaultNameQualifier),
plugin.WithVersion(plugin.Version{Number: 1}),
plugin.WithPlugins(kustomizecommonv1.Plugin{}, mylanguagev1.Plugin{}), // extend the common base from Kubebuilder
// your plugin language which will do the scaffolds for the specific language on top of the common base
)
You can test your plugin in two dimensions:
Validate your plugin behavior through E2E tests
Generate sample projects based on your plugin that can be placed in ./testdata/
You can check the Kubebuilder/v3/test/e2e/utils package, which offers a TestContext with rich methods:
NewTestContext helps define:
Temporary folder for testing projects
Temporary controller-manager image
Kubectl execution method
The cli executable (kubebuilder
, operator-sdk
, OR your extended-cli)
Once defined, you can use TestContext to:
Set up the testing environment
Validate the plugin behavior
Further make sure the scaffolded output works
Delete temporary resources after testing has exited
A combined sketch of these steps follows; see also the references below for complete examples.
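A rough end-to-end sketch of these steps is shown below. It assumes the kubebuilder test/e2e/utils package; the method names used (NewTestContext, Prepare, Init, Make, Destroy) exist in that package today, but double-check the signatures against the version you vendor before relying on them.
package e2e

import (
    "sigs.k8s.io/kubebuilder/v3/test/e2e/utils"
)

func runPluginE2E() error {
    // The binary name is whichever CLI ships your plugin (kubebuilder, operator-sdk, or your own).
    kbc, err := utils.NewTestContext("kubebuilder", "GO111MODULE=on")
    if err != nil {
        return err
    }
    // Delete temporary resources after testing has exited.
    defer kbc.Destroy()

    // Set up the testing environment (temporary project directory).
    if err := kbc.Prepare(); err != nil {
        return err
    }

    // Validate the plugin behavior: scaffold a project with your plugin.
    if err := kbc.Init("--plugins", "yourplugin/v1", "--domain", kbc.Domain); err != nil {
        return err
    }

    // Make sure the scaffolded output works, e.g. it builds and its tests pass.
    return kbc.Make("build", "test")
}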
References: operator-sdk e2e tests , kubebuilder e2e tests
It can be straightforward to view the content of sample projects generated by your plugin.
For example, Kubebuilder generates sample projects based on different plugins to validate the layouts.
You can also simply use TestContext to generate folders of scaffolded projects from your plugin.
The commands are very similar to those mentioned in creating-plugins .
The following is a general workflow to create a sample with the plugin go/v3 (kbc is an instance of TestContext):
To initialize a project:
By("initializing a project")
err = kbc.Init(
"--plugins", "go/v3",
"--project-version", "3",
"--domain", kbc.Domain,
"--fetch-deps=false",
"--component-config=true",
)
ExpectWithOffset(1, err).NotTo(HaveOccurred())
To define API:
By("creating API definition")
err = kbc.CreateAPI(
"--group", kbc.Group,
"--version", kbc.Version,
"--kind", kbc.Kind,
"--namespaced",
"--resource",
"--controller",
"--make=false",
)
ExpectWithOffset(1, err).NotTo(HaveOccurred())
To scaffold webhook configurations:
By("scaffolding mutating and validating webhooks")
err = kbc.CreateWebhook(
"--group", kbc.Group,
"--version", kbc.Version,
"--kind", kbc.Kind,
"--defaulting",
"--programmatic-validation",
)
ExpectWithOffset(1, err).NotTo(HaveOccurred())
Name: Kubebuilder version. Example: v2.2.0, v2.3.0, v2.3.1. Description: Tagged versions of the Kubebuilder project, representing changes to the source code in this repository. See the releases page for binary releases.
Name: Project version. Example: "1", "2", "3". Description: The project version defines the scheme of a PROJECT configuration file. This version is defined in a PROJECT file’s version field.
Name: Plugin version. Example: v2, v3. Description: Represents the version of an individual plugin, as well as the corresponding scaffolding that it generates. This version is defined in a plugin key, e.g. go.kubebuilder.io/v2. See the design doc for more details.
For more information on how Kubebuilder release versions work, see the semver documentation.
Project versions should only be increased if a breaking change is introduced in the PROJECT file scheme itself. Changes to the Go scaffolding or the Kubebuilder CLI do not affect project version.
Similarly, the introduction of a new plugin version might only lead to a new minor version release of Kubebuilder, since no breaking change is being made to the CLI itself. It’d only be a breaking change to Kubebuilder if we remove support for an older plugin version. See the plugins design doc versioning section
for more details on plugin versioning.
The scheme for project version "2" was defined before the concept of plugins was introduced, so plugin go.kubebuilder.io/v2 is implicitly used for those project types. The schema for project versions "3" and beyond defines a layout key that informs the plugin system of which plugin to use.
Changes made to plugins require a plugin version increase if and only if a change is made to a plugin
that breaks projects scaffolded with the previous plugin version. Once a plugin version vX
is stabilized (it doesn’t
have an “alpha” or “beta” suffix), a new plugin package should be created containing a new plugin with version
v(X+1)-alpha
. Typically this is done by (semantically) cp -r pkg/plugins/golang/vX pkg/plugins/golang/v(X+1)
then updating
version numbers and paths. All further breaking changes to the plugin should be made in this package; the vX
plugin would then be frozen to breaking changes.
You must also add a migration guide to the migrations
section of the Kubebuilder book in your PR. It should detail the steps required
for users to upgrade their projects from vX
to v(X+1)-alpha
.
Kubebuilder scaffolds projects with plugin go.kubebuilder.io/v3
by default.
Suppose you create a feature that adds a new marker to the main.go file scaffolded by init, which create api will then use to update that file. The changes introduced by your feature would cause errors if used with projects built with plugin go.kubebuilder.io/v2 unless users manually update their projects. Thus, your changes introduce a breaking change to plugin go.kubebuilder.io, and can only be merged into plugin version v3-alpha. This plugin’s package should already exist.
Kubebuilder’s functionality can be extended through the use of external plugins.
An external plugin is an executable (can be written in any language) that implements an execution pattern that Kubebuilder knows how to interact with.
The Kubebuilder CLI loads the external plugin in the specified path and interacts with it through stdin
& stdout
.
If you want to create helpers or addons on top of the scaffolds done by Kubebuilder’s default scaffolding.
If you design customized layouts and want to take advantage of functions from Kubebuilder library.
If you are looking to implement plugins in a language other than Go.
The inter-process communication between your external plugin and Kubebuilder is through standard I/O.
Your external plugin can be written in any language, provided it adheres to the PluginRequest and PluginResponse type structures.
PluginRequest
encompasses all the data Kubebuilder collects from the CLI and previously executed plugins in the plugin chain.
Kubebuilder conveys the marshaled PluginRequest (a JSON
object) to the external plugin over stdin
.
Below is a sample JSON object of the PluginRequest
type, triggered by kubebuilder init --plugins sampleexternalplugin/v1 --domain my.domain
:
{
"apiVersion": "v1alpha1",
"args": ["--domain", "my.domain"],
"command": "init",
"universe": {}
}
PluginResponse
represents the updated state of the project, as modified by the plugin. This data structure is serialized into JSON
and sent back to Kubebuilder via stdout
.
Here is a sample JSON representation of the PluginResponse
type:
{
"apiVersion": "v1alpha1",
"command": "init",
"metadata": {
"description": "The `init` subcommand is meant to initialize a project via Kubebuilder. It scaffolds a single file: `initFile`",
"examples": "kubebuilder init --plugins sampleexternalplugin/v1 --domain my.domain"
},
"universe": {
"initFile": "A simple file created with the `init` subcommand"
},
"error": false,
"errorMsgs": []
}
In this example, the init command of the plugin has created a new file called initFile.
The content of this file is “A simple file created with the init subcommand”, which is recorded in the universe field of the response.
This output is then sent back to Kubebuilder, allowing it to incorporate the changes made by the plugin into the project.
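To make the flow concrete, here is a minimal sketch of an external plugin written in Go. The request and response structs are local to this sketch and only mirror the JSON shapes shown above; check the sampleexternalplugin in the Kubebuilder repository for the authoritative types.
package main

import (
    "encoding/json"
    "io"
    "os"
)

// pluginRequest and pluginResponse mirror the PluginRequest/PluginResponse
// JSON shown above; the field set here is only what this sketch needs.
type pluginRequest struct {
    APIVersion string            `json:"apiVersion"`
    Args       []string          `json:"args"`
    Command    string            `json:"command"`
    Universe   map[string]string `json:"universe"`
}

type pluginResponse struct {
    APIVersion string            `json:"apiVersion"`
    Command    string            `json:"command"`
    Universe   map[string]string `json:"universe"`
    Error      bool              `json:"error"`
    ErrorMsgs  []string          `json:"errorMsgs"`
}

func main() {
    // Kubebuilder writes the marshaled PluginRequest to the plugin's stdin.
    in, _ := io.ReadAll(os.Stdin)

    resp := pluginResponse{APIVersion: "v1alpha1", Universe: map[string]string{}}
    var req pluginRequest
    if err := json.Unmarshal(in, &req); err != nil {
        resp.Error = true
        resp.ErrorMsgs = []string{err.Error()}
    } else {
        resp.Command = req.Command
        if req.Universe != nil {
            resp.Universe = req.Universe
        }
        if req.Command == "init" {
            // Add a file to the universe; Kubebuilder writes it to disk afterwards.
            resp.Universe["initFile"] = "A simple file created with the `init` subcommand"
        }
    }

    // Only the PluginResponse JSON may go to stdout; logs belong in a file.
    out, _ := json.Marshal(resp)
    os.Stdout.Write(out)
}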
When writing your own external plugin, you should not directly echo or print anything to stdout.
This is because Kubebuilder and your plugin communicate with each other via stdin and stdout using structured JSON data.
Any additional information sent to stdout (such as debug messages or logs) that’s not part of the expected PluginResponse JSON structure may cause parsing errors when Kubebuilder tries to read and decode the response from your plugin.
If you need to include logs or debug messages while developing your plugin, consider writing these messages to a log file instead.
kubebuilder CLI > 3.11.0
An executable for the external plugin.
This could be a plugin that you’ve created yourself, or one from an external source.
Configuration of the external plugin’s path.
This can be done by setting the ${EXTERNAL_PLUGINS_PATH}
environment variable, or by placing the plugin in a path that follows a group-like name and version
scheme:
# for Linux
$HOME/.config/kubebuilder/plugins/${name}/${version}/${name}
# for OSX
~/Library/Application Support/kubebuilder/plugins/${name}/${version}/${name}
As an example, if you’re on Linux and you want to use v2
of an external plugin called foo.acme.io
, you’d place the executable in the folder $HOME/.config/kubebuilder/plugins/foo.acme.io/v2/
with a file name that also matches the plugin name up to an (optional) file extension.
In other words, passing the flag --plugins=foo.acme.io/v2 to kubebuilder would find the plugin at any of these locations:
$HOME/.config/kubebuilder/plugins/foo.acme.io/v2/foo.acme.io
$HOME/.config/kubebuilder/plugins/foo.acme.io/v2/foo.acme.io.sh
$HOME/.config/kubebuilder/plugins/foo.acme.io/v2/foo.acme.io.py
etc…
The external plugin supports the same subcommands as kubebuilder already provides:
init: project initialization.
create api: scaffold Kubernetes API definitions.
create webhook: scaffold Kubernetes webhooks.
edit: update the project configuration.
Also, there are Optional subcommands for a better user experience:
metadata: add customized plugin description and examples when a --help flag is specified.
flags: specify valid flags for Kubebuilder to pass to the external plugin.
The flags subcommand in an external plugin allows for early error detection by informing Kubebuilder about the flags the plugin supports. If an unsupported flag is identified, Kubebuilder can issue an error before the plugin is called to execute.
If a plugin does not implement the flags subcommand, Kubebuilder will pass all flags to the plugin, making it the external plugin’s responsibility to handle any invalid flags.
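For illustration, an external plugin might answer the flags subcommand with a response like the sketch below. The flags field and its shape are an assumption based on the behavior described here; consult the sampleexternalplugin in the Kubebuilder repository for the exact schema.
package main

// flagSpec describes one flag the plugin accepts; field names are assumptions for this sketch.
type flagSpec struct {
    Name    string `json:"name"`
    Type    string `json:"type"`
    Default string `json:"default"`
    Usage   string `json:"usage"`
}

// flagsResponse is the answer to the `flags` subcommand, letting Kubebuilder
// reject unsupported flags before the plugin is executed.
type flagsResponse struct {
    APIVersion string     `json:"apiVersion"`
    Command    string     `json:"command"`
    Flags      []flagSpec `json:"flags"`
    Error      bool       `json:"error"`
    ErrorMsgs  []string   `json:"errorMsgs"`
}

func flagsFor(command string) flagsResponse {
    return flagsResponse{
        APIVersion: "v1alpha1",
        Command:    command,
        Flags: []flagSpec{
            // `number` is the hypothetical custom flag used in the CLI examples further below.
            {Name: "number", Type: "int", Default: "1", Usage: "hypothetical custom flag"},
        },
    }
}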
You can configure your plugin path with the environment variable $EXTERNAL_PLUGINS_PATH to tell Kubebuilder where to search for the plugin binary, such as:
export EXTERNAL_PLUGINS_PATH=<custom-path>
Otherwise, Kubebuilder would search for the plugins in a default path based on your OS.
Now you can use it by calling the CLI commands:
# Initialize a new project with the external plugin named `sampleplugin`
kubebuilder init --plugins sampleplugin/v1
# Display help information of the `init` subcommand of the external plugin
kubebuilder init --plugins sampleplugin/v1 --help
# Create a new API with the above external plugin with a customized flag `number`
kubebuilder create api --plugins sampleplugin/v1 --number 2
# Create a webhook with the above external plugin with a customized flag `hooked`
kubebuilder create webhook --plugins sampleplugin/v1 --hooked
# Update the project configuration with the above external plugin
kubebuilder edit --plugins sampleplugin/v1
# Create new APIs with external plugins v1 and v2 by respecting the plugin chaining order
kubebuilder create api --plugins sampleplugin/v1,sampleplugin/v2
# Create new APIs with the go/v4 plugin and then pass those files to the external plugin by respecting the plugin chaining order
kubebuilder create api --plugins go/v4,sampleplugin/v1
After creating a project, usually you will want to extend the Kubernetes APIs and define new APIs which will be owned by your project. Therefore, the domain value is tracked in the PROJECT file which defines the config of your project and will be used as a domain to create the endpoints of your API(s). Please, ensure that you understand the Groups and Versions and Kinds, oh my! .
The domain is used as the group suffix, to explicitly show the resource group category.
For example, if set --domain=example.com
:
kubebuilder init --domain example.com --repo xxx --plugins=go/v4
kubebuilder create api --group mygroup --version v1beta1 --kind Mykind
Then the resulting resource group will be mygroup.example.com.
If the domain field is not set, the default value is my.domain.
How to use klog or other loggers as the project logger instead of the zap logger provided by controller-runtime?
In the main.go
you can replace:
opts := zap.Options{
Development: true,
}
opts.BindFlags(flag.CommandLine)
flag.Parse()
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
with:
flag.Parse()
ctrl.SetLogger(klog.NewKlogr())
You can enable leader election. However, if you are testing the project locally using the make run
target, which runs the manager outside of the cluster, then you might also need to set the
namespace where the leader election resource will be created, as follows:
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
MetricsBindAddress: metricsAddr,
Port: 9443,
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: "14be1926.testproject.org",
LeaderElectionNamespace: "<project-name>-system",
})
If you are running the project on the cluster with the make deploy target,
then you might not want to add this option. So, you might want to customize this behaviour using
an environment variable to only add this option for development purposes, such as:
leaderElectionNS := ""
if os.Getenv("ENABLE_LEADER_ELECTION_NAMESPACE") != "false" {
leaderElectionNS = "<project-name>-system"
}
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
MetricsBindAddress: metricsAddr,
Port: 9443,
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionNamespace: leaderElectionNS,
LeaderElectionID: "14be1926.testproject.org",
...
If you are facing the error:
1.6656687258729894e+09 ERROR controller-runtime.client.config unable to get kubeconfig {"error": "open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied"}
sigs.k8s.io/controller-runtime/pkg/client/config.GetConfigOrDie
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/client/config/config.go:153
main.main
/workspace/main.go:68
runtime.main
/usr/local/go/src/runtime/proc.go:250
when you are running the project against an old Kubernetes version (maybe <= 1.21), it might be caused by this issue : the mounted token file is set to 0600 (see the solution here). The workaround is:
Add fsGroup in the manager.yaml:
securityContext:
runAsNonRoot: true
fsGroup: 65532 # add this fsGroup to make the token file readable
However, note that this problem is fixed and will not occur if you deploy the project on newer Kubernetes versions (>= 1.22).
When attempting to run make install to apply the CRD manifests, the error Too long: must have at most 262144 bytes may be encountered.
This error arises due to a size limit enforced by the Kubernetes API. Note that the make install target will apply the CRD manifest under config/crd using kubectl apply -f -. Therefore, when the apply command is used, the API annotates the object with last-applied-configuration, which contains the entire previous configuration. If this configuration is too large, it will exceed the allowed byte size. (More info )
Using server-side apply might seem like the ideal solution, since the entire object configuration would not have to be stored as an annotation (last-applied-configuration) on the server. However, it’s worth noting that, as of now, it isn’t supported by controller-gen or kubebuilder. For more on this, refer to: Controller-tool-discussion .
Therefore, you have a few options to work around this scenario, such as:
By removing the descriptions from the CRDs:
Your CRDs are generated using controller-gen . By using the option maxDescLen=0 to remove the descriptions, you may reduce the size, potentially resolving the issue. To do so, you can update the Makefile as in the following example and then call the make manifests target to regenerate your CRDs without descriptions, see:
.PHONY: manifests
manifests: controller-gen ## Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
# Note that the option maxDescLen=0 was added in the default scaffold in order to sort out the issue
# Too long: must have at most 262144 bytes. By using kubectl apply to create / update resources an annotation
# is created by K8s API to store the latest version of the resource ( kubectl.kubernetes.io/last-applied-configuration).
# However, it has a size limit and if the CRD is too big with so many long descriptions as this one it will cause the failure.
$(CONTROLLER_GEN) rbac:roleName=manager-role crd:maxDescLen=0 webhook paths="./..." output:crd:artifacts:config=config/crd/bases
By re-designing your APIs:
You can review the design of your APIs and check whether they have more specs than they should, for example by violating the single responsibility principle. If so, you might want to re-design them.
If you’re seeing this page, it’s probably because something’s not done in the book yet, or you stumbled upon an old link. Go see if anyone else has found this or bug the maintainers .