PKS – what’s next? (ending with a fun new toy for you)

PKS is proving to be a success.   It’s not just us who feel that way!   CRN just awarded their 2018 Innovator Awards – and PKS was a winner!   Read here.

Ultimately customers and the market choose – but the original formula is working:

  • stay aligned with and contribute to upstream k8s, Google, and the community
  • focus on k8s lifecycle management, and deliver the platform through rapid “platform as product” continuous delivery so customers can keep up with the quarterly k8s cadence and not get crufty
  • accept, embrace and solve the challenges of managing fleets of k8s clusters
  • make “multi-cloud” part of the product
  • make the platform open – but also complete, with deep integration with NSX-T as one example of many.

That said – we know we have a lot to do.

I love working with the product team as we prioritize, work through technical problems, and debate the next areas to tackle.   Walking through the PKS backlog in our Aha! board is like taking a tour inside the collective minds of a growing pantheon of customers – you can see their input shaping where the VMware and Pivotal teams go next.

Sidebar: It’s funny – in my career, I always end up in wonderfully strange jobs.   I think the pattern might be caused by my own weirdness.   The roles always bridge the engineering/field worlds – and I’ve found that even when the role is geared one way or another (my former GM role of the converged business at Dell EMC was geared more inwards; the current role at Pivotal is geared more outwards), I find ways to twist the role I’m in so I blend both.

I think the reason is twofold: 

  • If I’m honest with myself, the main reason is my own messed-up psyche – I love the role of the “SE”; it’s my natural home. SEs everywhere (at Pivotal the role is called a “Platform Architect”; elsewhere it’s “architect” or any of a dozen other names) sit at the nexus of the technology and the market.   Find an SE, and listen – you’re likely to get the best feedback you could ask for.   People (including me) often revert to their own true nature.
  • The second reason is that the intersection of product and field is where so much magic can happen (or fail to happen). It’s an opportunity for “impact”.   When you see great ideas fail, it’s often because that intersection wasn’t working.

With PKS 1.2.x and NSX-T 2.3.x we crossed a critical “ready for the big leagues” mark – but there’s a lot to do.

So – what’s next for PKS?   One person’s opinion in 5 bullets:

  1. Continued work on fundamentals. While PKS 1.2.x is ready for production at customers of all types – we shouldn’t kid ourselves: we are still in the “mid hype-cycle” of k8s in the enterprise.   As we dig in, there are a ton of things to discover.   Examples include: making pod security policy controls work better; making telemetry/observability work better; Windows container support; expanding RBAC roles.   This “work on fundamentals” also includes NSX-T – which is always “batteries included” with PKS.   There is a lot of great stuff coming in the near-term NSX-T roadmap and its intersection with PKS – like multiple T0 configs, more automation, big scale leaps for endpoints and LBs, more network profile flexibility… a LOT.   And, of course, we have to keep proving, over and over, that we have a platform that can reliably update itself to keep up with k8s – in a way that customers can count on, in the middle of production, without even blinking.   Kubernetes 1.12 is coming soon to a PKS near you!
  2. Expansion and improvement of multi-cloud coverage. I can see from the CFCR (a component in PKS) drumbeats where we are on Azure support.   I’m feeling really good that soon we will be “complete” with “built in” (vs. a lot of scripting the customer owns) support for all the cloud IaaS that matter.   But – even once we “tick all the boxes”, there’s a lot to do to make the support work better (and NSX-T support on every cloud will go a long way – VMware NSBU team, this is important 😊).
  3. Doubling down on the k8s core.  There are things that we can only do properly through contribution to upstream k8s.   Examples that jump to mind are work on ClusterAPI, kubeadm and how they could work with BOSH (and how they work in the absence of BOSH, or how these ideas can come together), pod security policy, federated k8s clusters and much more.   The examples extend to other graduated projects like Prometheus.   I’m really excited about what the addition of the Heptio team (when the transaction closes) can do here – but that doesn’t absolve the existing VMware/Pivotal teams of the need to keep pushing hard and contributing.
  4. Work on what’s below, beside, and on top of PKS.
    1. “Below” = lots of work to continue to improve the CSI/CNI ecosystem and its interactions with PKS. In spite of what people think – PKS is wide open to plugins and other endpoint extensions.   We will always take a posture of “integrate value, but always maintain openness”.   For example, Wavefront, vRealize Log Insight, and vRealize Automation are deeply integrated, yes – but not required.   This is intentional.   We are proud of the great SDN that is NSX-T and its unique value and integration with PKS – solving problems in unique and great ways.   But – we know customers want choice.   Lots of work is underway here to help customers who want to use PKS with the broad ecosystem.
    2. “Beside” = there’s a pattern to the CNCF tools people use with PKS – Grafana, Prometheus, fluentd, ELK, etc. Add to that the great OSS tools where Heptio is leading the charge.   I think we can do a better job of thinking about how we package/support these tools.   Internally, I am NOT advocating that we tightly couple these “beside” ingredients by “baking them in”.   The “beside”/“below”/“on top” CNCF ecosystem should have low/no bindings to PKS itself – but we can do work to make the common deployment patterns better documented and easier to use.   This is one of the design principles of PKS vs. others in this space…   PKS tries to do one thing really well (k8s fundamentals) while being really open to the ecosystem tools (and the “on top” ISV/developer ecosystem).   It’s not the only way to roll.   Of course there’s the RedHat example, but I’m thinking of others.   There are some really, really interesting things people are doing (and I don’t begrudge them).   Examples are things like Robin.io, which tightly integrates the “below”, “beside” and “on top” around solving specific (real) challenges with data/persistent workloads on k8s.   I don’t AGREE (I think over time, things get “crufty” when you try to do too much) – but I get it.   I think the way to solve those problems is in a way that doesn’t create strong bindings.
    3. “On Top” = loads of interesting things here that I mentally group into two piles.
      1. The ISV ecosystem. We’re at the point where there are 8 partners with whom we’ve collaborated on their Helm charts/K8s operators on PKS (LINK).   Our posture to all ISVs is simple – we will always be “vanilla” K8s – so anything they do with us is wide open, and will work on any other k8s.   Lots to do here.
      2. The developer value on top of k8s. While people can build their own developer toolchain and pipelines on top of k8s (of course), often with tools like CloudBees (PKS packaging here) – it’s a long way today from the beautiful developer experience of Cloud Foundry.   Customers tell us this loud and clear.   We’ve shown our hand – we think there is a lot of work to do here with Knative, Istio, open buildpacks, Spring Cloud and a lot more.   This won’t happen overnight, but lots to do!
  5. Doing the things to get to tens of thousands of customers.
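One concrete illustration of the “vanilla k8s” posture running through the bullets above: the RBAC work called out in item 1 builds on standard upstream Kubernetes RBAC that you can use on a PKS cluster today. Here is a minimal sketch – the namespace, role, and group names are made up for illustration:

```yaml
# Illustrative: a namespaced read-only role for a dev team.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a          # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
---
# Bind the role to a group from the cluster's identity provider.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: pod-reader-binding
subjects:
- kind: Group
  name: team-a-devs          # hypothetical group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

You can `kubectl apply -f` this against any conformant cluster – nothing in it is PKS-specific, which is exactly the point.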

That last one is particularly interesting to me.   I’ve learned a lot about scaling products and GTM over the last few years, and a product needs to become super-easy if you want it to scale.   Right now, we have the “machinery” (product, GTM, packaging, ecosystem) to get PKS from hundreds to thousands of customers.   We DON’T yet have that figured out for getting to tens of thousands.

Some people use the term “vSAN simple” to shorthand all the things that are needed for a product to work that way.   I’ll remind people that vSAN wasn’t always so simple, it took the vSAN team a lot of hard work to make it that simple 😉

At VMworld Barcelona we announced that the VMware channel will be enabled for PKS in Q1 – with the competency program and registration mechanics in place – and the team is working furiously to get there.   This is the stuff “around” a product that helps it scale.

Then there are the things we need to do in the product itself – and we need to think through all the ways people want to consume k8s.   Today, this is as multi-cloud software they control (VMware PKS), and soon (as beta closes) as a multi-cloud service they consume (VMware Cloud PKS).   There are lots of great opportunities to keep making these simpler, simpler and simpler.

I’ll give you a current, and real example of making the product scale better.

One of the main points of VMware PKS is that its definition of multi-cloud presumes that one of the main clouds is the vSphere cloud most customers have on premises.  Yes, most of our PKS customers use more than one cloud (vSphere + AWS/Azure/Google), but very few use VMware PKS only on public clouds (those that do tend to prefer the service model of VMware Cloud PKS).

So… making VMware PKS work better for all the vSphere administrators out there is really important!    Check out this web UI plugin for VMware PKS – a fling that is available for you right now!

You can download the bits and the docs here.   Have fun!   Thanks to the team for sharing the VMware fling publicly!

The VMware PKS future is so bright, you’ve got to wear shades – and hang on, we’re iterating fast!    Feedback is ALWAYS welcome – what do YOU think we should be prioritizing?

2 thoughts on “PKS – what’s next? (ending with a fun new toy for you)”

  1. So, what does multi-cloud PKS with vSphere and AWS look like? Do you need VMware Cloud on AWS, or can you just use “standard” AWS infrastructure?
    What other components are required to extend on-prem PKS deployment to be able to use public cloud for additional capacity? HCX?
    Is there any PKS multi-cloud reference design guide available?

    1. Sorry for the delay, and thank you for the question. PKS on AWS (and GCP and soon Azure) runs directly on the standard IaaS services of the public clouds. It does not require VMC. Today federated Kubernetes clusters are a no-go, so the deployment pattern is to deploy foundations (single-AZ or multi-AZ) on premises or off premises, and have a common control plane, observability, RBAC, etc. in all cases. You don’t stretch the network.

      There are PKS deployment docs for each of the IaaS you can deploy on here: https://docs.pivotal.io/runtimes/pks/1-2/#preparing

      Over time, multi-cloud NSX-T will be a cool option…
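      To make the “separate foundations, common control plane” pattern above concrete: clusters from each foundation just show up as ordinary contexts in your kubeconfig, and you switch between clouds with `kubectl config use-context` – no stretched networking involved. A sketch (every name and endpoint below is invented for illustration):

```yaml
# Illustrative kubeconfig: one context per PKS cluster, across two foundations.
apiVersion: v1
kind: Config
clusters:
- name: onprem-cluster-1              # cluster on the vSphere foundation
  cluster:
    server: https://onprem-cluster-1.example.com:8443
- name: aws-cluster-1                 # cluster on the AWS foundation
  cluster:
    server: https://aws-cluster-1.example.com:8443
contexts:
- name: onprem
  context: {cluster: onprem-cluster-1, user: dev}
- name: aws
  context: {cluster: aws-cluster-1, user: dev}
users:
- name: dev
  user: {token: REDACTED}
current-context: onprem
# Switch clouds with: kubectl config use-context aws
```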
