PKS 1.1 is now Generally Available!

Stop reading this right now.   Go here, download the bits.   Go here, read the docs.

Talk is cheap, shipping code talks, and customers sing.

If you want a great summary of the PKS 1.1 release – you can get it here.

A huge congratulations to the VMware and Pivotal product teams that work so hard on PKS.   Building a great product is never easy.  Doing it across companies isn’t easy.   But we have, we are, and we will continue to do so.   It’s the right thing for the customers.   I’ve seen this before (see VxRail) – and the juice is worth the squeeze.

What is PKS aiming to do, what is it aiming to be?   What’s the team’s passion?

At the highest level, PKS aims to be the gold standard of enterprise multi-cloud Kubernetes.   We aim to manifest the k8s APIs as a dial-tone, and to do it in a way that meets the needs of enterprises: platforms they can consume, that are sustainable, and that don’t become fragile over time.   We want it to be a platform, an abstraction, that our customers, their operations teams, and their developers LOVE – just like we have done for the Pivotal Application Service, for the Spring ecosystem, with the VMware SDDC stack, and more.

This goal manifests in 4 themes that we think of every single day as we build PKS:

  1. Turnkey solution. All the things you need to use and operate a container runtime. In one package. On every cloud.
  2. Enterprise readiness, control and security. Continually updated platform, embedded OS, secure container registry, policy-driven networking, integrated IdM. Controllable and customizable by service plans.
  3. Developer empowerment. Consume app services, popular tools “just work” with vanilla Kubernetes. Developers get the Kubernetes they want, for their choice of workloads.
  4. Multi-cloud. Run it on any infrastructure of choice. On premises or in the public cloud.

Here are the high notes of what’s in PKS 1.1:

  • Kubernetes 1.10
  • Multi-AZ
  • Multi-Master with Multi-AZ (beta feature)
  • Centralized Access Management, big steps up in RBAC and LDAP integration
  • Network isolation at the pod, node and cluster level… Including NSX-T 2.1 integration, and automated multi-AZ load balancing – all integrated and automated (simple = cool).
    • Aside: we’re continuing to simplify NSX-T, including Concourse CI automation pipelines – come to VMworld and see a lot more – the speed of innovation, integration and partnering here is accelerating.
  • Flexible Network Topology Choices for Kubernetes Node Networks
  • vRealize Log Insight Integration
  • Wavefront by VMware Integration
  • Support for Harbor 1.5.1
  • Support for persistent volumes, including Hatchway (including AZ failover and policy-based preference)
  • and much, much more….
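To make the “turnkey” theme above concrete, here’s a hedged sketch of the operator-side flow with the PKS CLI – the cluster name, plan name, hostnames and credentials below are hypothetical placeholders, not from this post:

```shell
# Authenticate against the PKS API endpoint
# (hostname, user and CA cert are placeholders)
pks login -a api.pks.example.com -u alana -p 'secret' --ca-cert ./pks-ca.crt

# Create a cluster from an operator-defined service plan;
# the plan encodes sizing, AZ placement and other policy
pks create-cluster demo-cluster \
    --external-hostname demo-cluster.example.com \
    --plan small

# Check provisioning status, then fetch a kubeconfig for kubectl
pks cluster demo-cluster
pks get-credentials demo-cluster

# From here, it's vanilla Kubernetes
kubectl get nodes
```

The point of the sketch: everything below the `create-cluster` call (VMs, OS, networking, load balancing) is the platform’s problem, not yours.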

… and there’s also broad integration with the whole K8s ecosystem.

Just because I didn’t list something, don’t assume that it doesn’t work – you should assume that anything that works with K8s works with PKS. We use Sonobuoy as part of our conformance testing, as well as other open source ecosystem tools.   We’ve done a lot of work with Portworx and many others on persistence, with Prometheus, and a lot more.   And… a ton of GREAT ISVs that are building containerized variants of their stacks are working with us – examples like Redis Labs, CrunchyData, Confluent (Kafka), IBM (WebSphere Liberty and MQ), and too many to list.   If you are an ISV, if you do something cool that runs in a container and has a Helm chart, and you want the BEST enterprise Kubernetes platform to run on – something you can count on just being there, nice, happy and boring so your software can do its thing – drop us a line, we want to partner with you!
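To make the “anything that works with K8s works with PKS” claim concrete, the conformance and ecosystem checks mentioned above look roughly like this against any cluster – the Helm release name and chart choice are illustrative, not from this post:

```shell
# Run the upstream CNCF conformance suite with Sonobuoy
# (the same tool we use as part of our conformance testing)
sonobuoy run
sonobuoy status
sonobuoy retrieve ./results    # tarball of conformance results

# "Popular tools just work": a stock Helm chart deploys unmodified
# (Helm 2 syntax, which was current at the time of this release)
helm init
helm install --name my-redis stable/redis
```

Nothing here is PKS-specific – which is the point of staying on native upstream k8s.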

Aside: one of the design “pillars” of PKS is that we use native k8s, and that we aim for constant compatibility with Google Kubernetes Engine, the “gold standard” of off-premises, scaled Kubernetes.    Here, for example, the June 18th update of GKE added k8s 1.10.4, and 14 days later, there’s a PKS release.   No forks; contributions, when needed, get committed upstream.   This means PKS = “vanilla, but fast and current K8s dial-tone”.

IMO, this is important – K8s itself, and other early projects that are linked to and built on top of k8s, like (but not limited to) Istio, are moving fast.   People who design their platform such that they don’t: a) stay current with upstream k8s; b) update the platform frequently and easily; c) keep as close as possible to Google and native upstream k8s – well, I think they will be in a world of hurt over time, and so will their customers.    I’m going to do another post on this in a while.

To make this real, here are some example demonstrations (thank you Dan Baskette + the Pivotal PEZ team!)…. I will update this post over the coming days with more examples.

1) Deploying PKS 1.1 in a multi-AZ pattern…

2) Using PKS 1.1 and NSX to automate a ton of objects/policies, automatically create network isolation and inter-AZ loadbalancing…

3) Using PKS 1.1 and vRealize Log Insight for SIMPLE, but deep K8s logging (including cluster, pod and all sorts of granularity)…

4) Using PKS 1.1 and VMware Wavefront for SIMPLE, but deep K8s observability and performance management…

What’s next for PKS?

  • We will continue with a roughly monthly release cadence.
  • We will continue to deepen platform observability.
  • We will continue to deepen security/authentication – more granularity, more roles.
  • We will continue to expand use cases (GPUs).
  • We will continue to expand the IaaSes supported.

… and we will do it all with a pattern of focusing on the customers, building on the themes – and delivering a platform they love.

What’s the bigger picture?

I’ve been in this role at Pivotal now for 3 months, and I’ll tell you what I’ve found at the most “meta” level:

We believe ALL customers need multiple abstractions.   Do you?  

  • Customers have all sorts of workloads.
  • Customers have apps with different requirements.
  • Customers have developers that use different frameworks, tools, and have different skillsets.

… And while I constantly get “once I use containers, I’m done, right?” (sigh, no)… people grapple with the fact that you want to use the highest-order abstraction that you practically can (key word being “practically”) – in practice, each abstraction has its place.

If you answer yes – then you want a great “dial tone” for each abstraction.  You don’t care about what’s below the abstraction.   I’ve seen some infrastructure-centric people think that the answer is one MEGA layer, and it’s not that that’s intrinsically bad, UNLESS it triggers a whole pile of trade-offs – and ends up a flying boat (moderate at multiple things, bad at each).

Pivotal Application Service is the gold standard of a multi-cloud app service abstraction.   VMware’s and Pivotal’s Container Service is the new gold standard of a multi-cloud Kubernetes abstraction.   The VMware SDDC stack is the gold standard of the multi-cloud kernel-mode VM IaaS abstraction.   We will be doing the same for Functions/Event Stream use cases – stay tuned for more.

We believe that ALL customers will have a multi-cloud model, including on-premises.   Do you?

In my experience, this is a fact: there are economic, security, and governance requirements driving the necessity of a hybrid and multi-cloud model across almost all customers.   If you dispute this, I want to hear from you and discuss – but unless you’re a uni-app startup, it’s just not the reality I see.

If you answer yes – then you want abstractions that are able to run on and off-premises in very consistent ways.

No, of COURSE there won’t be one “unifier of all cloud, on all abstractions” (the human search for simple answers is built-in to us).   

 Yes, of COURSE we will all use more public cloud IaaS, PaaS, CaaS, SaaS and other data/API surfaces over time – certainly more than today. 

But I will vehemently argue that we will see a blend of on-premises and off-premises, and there will be multiple clouds in the market.   Pundits aside, this is increasingly a widely understood fact.    PKS (and PAS, and PFS) supports a consistent multi-cloud model – with total “dial tone” consistency.   Easy for others to say/claim – but we do it for real.

We believe it is more important to be “Fast Forever” than it is to be first.   Do you?

What do I mean by “Fast Forever”?  If you’re spending time building platforms, you’re not going to be fast forever – particularly if there’s a lot of very fragile glue (scripts/automation).   If you want to be “Fast Forever”, you need to think of a culture of sustainability.  If you want to be “Fast Forever”, you need to be able to ship frequently and update without blinking.  It’s true of the app, and it’s true of the platform itself.   If you want to be “Fast Forever”, you need to think about how your people work, and where you push them to NOT work.    And yes, if you want to be “Fast Forever”, you need platforms that update themselves, either as a service or via a CI pipeline – and those platforms need to hold to critical abstractions between layers like religion.   This means the RIGHT platforms – those that will support “Fast Forever” – are sometimes going to say “nope, we don’t do that”.

I’ll give you an example.

Multi-master, multi-AZ behavior in PKS is different.   We have designed our implementation in a way that lets us do this with hundreds of k8s clusters.  Why is this important?   Because anyone who is serious about k8s in the enterprise knows that the idea of a single massive k8s cluster, with namespaces used for isolation, tenancy, and authentication, is just flat out wrong.    If you want to argue this, go ahead.  I’m not going to argue with you (I’ve found this is a waste of time).   I’m going to say “go for it, and let’s compare notes a few years from now”.

The question isn’t “do you do multi-master/multi-AZ?”   It’s “do you do multi-master, multi-AZ (and all the other things), with many clusters, with the associated tenancy constructs and network isolation – AND can you do it through many platform updates… so you maintain dial-tone and stay ‘Fast Forever’?”

If you answer yes – you want platforms that PROVE their ability to sustain their dial-tone, and do it in the way that enterprises are used to.  VMware has proved it deserves that shot via what it has done with the VMware SDDC stack over a decade.  Pivotal has proved we deserve that shot via what we have done with PAS over almost a decade.  Together, we are proving it with PKS – and PKS 1.1 is a huge milestone.

We believe that in the end – it’s about the outcome.   Do you?  

Here’s where the time spent in the trenches with our key customers on 1.0.x and 1.1.x over the last few months has been priceless.  We’re listening.  We’re learning.   We’re iterating faster and faster – together.   We’re deploying all sorts of differing workloads.  In the end, this is what matters – more than anything I write or anything we ship.

It’s the early days of this journey, but today is a day to celebrate, thank our customers, thank each other on the product teams – and to our respected competition (whether it’s just in the container/k8s space, or more generally)… we’re just starting to warm up our engines 😊







