PKS 1.3 – Happy New Year, and a peek into 2019!

Here’s the TL;DR edition (still long-winded).

  • Azure support completes the hand. Adding Azure support completes the “Royal Flush” winning hand: a common PKS control plane and a common K8s experience on the target of your choice (and in fact any mix of them) – vSphere on prem, AWS, GCP, and now Azure. PKS has common observability, control-surface APIs, and logging experiences – and of course consistent, common K8s release and container-runtime support, regardless of the IaaS you choose.
  • Constant compatibility – proven in a sustainable way. PKS 1.3 supports Kubernetes 1.12. I think we can now say that we have proven our philosophy of constant compatibility and total alignment with upstream native Kubernetes. It has held true for a year: we ship within a couple of months of a K8s release going GA, and we are the FIRST to offer that common K8s release on the clouds of your choice. Our metric of “are we fast enough?” used to be GKE. We’ve now found that we’re running faster than GKE on K8s updates – and needed to relax our internal restriction of “don’t release until it’s GA on GKE.” I’m insanely curious what’s behind the GKE cadence slowdown (and have no idea).
  • Simple, easy platform upgrades – the proof is in the pudding. We’re now in the era where K8s CVEs are coming fast and furious. There was this one (solved in PKS 1.2), then this one (solved in PKS 1.2.5)… and yes – there will be more. CVEs are NORMAL. We also live in a glass house; our software isn’t perfect either. This isn’t scary… unless you can’t upgrade easily, unless you don’t have an automated pipeline that updates the platform in prod without you blinking (at which point you are permanently terrified). If you can maintain this yourself, great. If you look at yourself in the mirror and say “I can’t” or “I don’t want to – I have other things to work on,” we can help you. Harbor cracks the nut for CVEs in your containers (PKS includes Harbor 1.7.1 – lots of goodies there, which you can read about here). PKS via CFCR/BOSH cracks the nut for CVEs in your host OS. The PKS control plane cracks the nut for PKS itself.
  • Fleet cluster management – the proof is in the pudding. Every day I see more customers nodding their heads that K8s clusters remain the best tenant boundary. The counterarguments generally involve resource bin-packing and management burden. I think there are ways to cover the management burden (we certainly aim to do that – and projects like ClusterAPI have a lot to learn from what we’ve done, and EQUALLY vice versa). That’s also not to say the K8s community won’t make progress on namespaces as tenancy boundaries at the same time – it will, and we will do our part to help. But right now, putting aside exceptional cases, I feel pretty confident you’re going to have lots of K8s clusters. You’re going to need to operate them (with all the elements that go with that around observability, authentication, networking, etc.). Smarter people than I agree. You cannot deny the merit of the position when people like Jessie Frazelle chime in, as she did here late in December in response to a Pivot’s post that is worth reading here. But beyond tenancy questions, there are also considerations around blast radius. I thought it was awesome that Target shared some of their recent experiences in this domain. I highly encourage you to read this. The comment thread is also useful (I’m always encouraging people to comment, contribute, discuss). I’m constantly debating this with some customers (you know who you are, and we love you, even though we think you are wrong here).
  • K8s, the way you want. “PKS” is shorthand: “VMware PKS” = “Pivotal PKS” = “PKS”. People get confused by this sometimes, but you wouldn’t be if you were inside the product team. PKS is completely a joint effort. If you’re on the product team, your VMware badge or Pivotal badge fades to black, and your focus is on making PKS great. A bit more on “the way you want” is in the “what’s next in 2019” section below.
  • Continuing focus on operations. Check out the new smoke tests in PKS 1.3: creation of ephemeral K8s clusters, running tests, and only then upgrading prod clusters (and then shutting the ephemeral clusters down). Nice – there’s a rough sketch of that flow right after this list. Check out the LONG list of networking improvements (a huge set of items in the PKS customer-intent-led backlog sit at this intersection of the network and K8s). There’s a lot more – but it’s all rooted in the philosophy of delivering real platform value, not just a bag o’ bits.
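To make that ephemeral-cluster-first flow concrete, here’s a minimal sketch of what it looks like if you drive it yourself from the PKS and kubectl CLIs. In PKS 1.3 the built-in smoke tests do this for you as part of the platform upgrade itself; the script below is purely illustrative, and the cluster names, plan, hostnames, test workload, and exact CLI flags are assumptions, not the precise product surface.

```python
# Illustrative only: the real PKS 1.3 smoke tests run as part of the platform
# upgrade; this sketch just shows the shape of the flow. Cluster names, plan,
# hostnames, and the test workload are hypothetical, and exact CLI flags may
# differ by PKS version.
import subprocess
import sys

def run(cmd):
    """Echo and run a CLI command; raise if it exits non-zero."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

EPHEMERAL = "smoke-test"      # throwaway cluster (hypothetical name)
PROD = "prod-cluster"         # the cluster you actually care about (hypothetical)

try:
    # 1. Create a throwaway cluster on the new release.
    run(["pks", "create-cluster", EPHEMERAL,
         "--external-hostname", "smoke-test.example.com",
         "--plan", "small"])
    # (a real pipeline would poll `pks cluster smoke-test` until the
    #  create action succeeds before moving on)

    # 2. Point kubectl at it and run whatever tests you trust.
    run(["pks", "get-credentials", EPHEMERAL])
    run(["kubectl", "get", "nodes"])
    run(["kubectl", "apply", "-f", "smoke-test-app.yaml"])  # hypothetical test workload
    run(["kubectl", "rollout", "status", "deployment/smoke-test-app"])
except subprocess.CalledProcessError:
    print("Smoke tests failed – leave prod alone and investigate.")
    sys.exit(1)
finally:
    # 3. Tear the ephemeral cluster down either way.
    subprocess.run(["pks", "delete-cluster", EPHEMERAL])

# 4. Only now touch prod. In PKS itself, prod cluster upgrades are driven
#    through the control plane (Ops Manager / BOSH), not a hand-rolled script.
print(f"Ephemeral checks passed – safe to upgrade {PROD}.")
```

The point isn’t the script – it’s that the platform treats “prove the upgrade on a disposable cluster first” as a first-class, automated step rather than a wiki page.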

I’d encourage you to read this blog post by my sister Elisabeth Hendrickson, who leads the Pivotal part of the PKS R&D team with her amazing brother Brad Meiseles (and their shared, awesome extended team) – BTW, the “Royal Flush” analogy seems to have occurred to us independently 😊

I’d also encourage you to read this blog post by Narayan Mandaleeka, one of the awesome VMware-badged product leads on PKS. There are a TON of additions in PKS 1.3 that he details at length.

I made these hoodies for “Team PKS”.  

The hoodie is exclusive 😉! You can’t buy one; you need to earn it. They are available to the combined VMware/Pivotal team that is the driving force behind PKS; to customers who are in prod and sharing their stories (good and bad); and to the VMware and Pivotal field teams that help those customers. In 2019, we will extend that to channel partners who are supporting PKS. You are ALL part of “Team PKS”.

I want to leave you with one pithy soundbite – but there’s a LOT embedded in this simple statement (IMO). Smart people with important things to do don’t build platforms. They recognize that “platform” = “the point where you decide to consume ‘below’ and construct ‘above’”. Building your own container platform is fun and interesting – so from a learning standpoint, GO TO TOWN. I am arguing that from a business standpoint, building your own container platform is a totally stupid, silly waste of time. It’s a reasonable choice to not choose us (I get that) – and you have a reasonable set of choices (fewer multi-cloud, enterprise-ready choices). DIY is a total waste of time, and you simply won’t be able to sustain it.

Argue with me (seriously, please – let’s discuss – am I missing something?) but only after you’ve had the platform in prod, and sustained it for a year or two.

So moving on from PKS 1.3… What should the market expect from us in 2019?

  1. We will keep up this cadence – in fact, I think we will go even faster. If you’re a competitor – hang on to your hat, take a deep breath, and get ready. If you’re a customer – celebrate; this is good for you, and the whole ecosystem is moving fast. But remember – look at your platform through that continuous-delivery lens: can you constantly evolve it, or is updating parts or all of it intimidating?
  2. We are now spending a lot of time and effort on “how do we make PKS awesome for thousands, then tens of thousands, and then hundreds of thousands of vSphere customers?” The vSphere plugin fling (now at v1.0.1) is just the start. PKS will be available to all VMware channel partners early this year. Warm up your engines.
  3. We will keep improving the common multi-cloud experience. A big part of this is NSX-T value becoming multi-cloud itself. Today, one difference in deployment is that the SDN layer varies by IaaS (i.e., PKS uses the load balancers native to each IaaS). We’ll solve this (keeping it open, of course – but for those wanting a common experience, making that simple and obvious).
  4. We will keep extending PKS – currently there is an enterprise software variant (VMware PKS/Pivotal PKS) you control, and a SaaS cloud variant (VMware Cloud PKS) you consume. We can imagine another variant based on customer feedback. These share important code, but more importantly, shared philosophies – including a total, utter focus on upstream Kubernetes and the community. I think our new Heptio friends can help us a lot in bringing some of what we do to the community at large, and in some SIGs where we can do more. Stay tuned. These efforts are all PKS.
  5. An exciting brave new world of dev value on TOP of K8s. There are lots of CNCF efforts here – and a good opportunity to contribute, and also to curate. Remember: if people think of VMware as “leading software-defined infrastructure,” they think of Pivotal as “amazing developer experiences, tools, and techniques” – and we’re bringing that to K8s. People are realizing that there is a huge delta between the developer value of “cf push” and “kubectl create” (the sketch after this list tries to make that delta concrete). Something to consider: Pivotal is the driving force behind the amazing open Spring community. Spring is the leading dev framework, the source of modern Java. Spring Boot = build everything. Spring Cloud = coordinate everything. Spring Cloud Data Flow = connect everything. Add our efforts around PFS/Knative, Istio, Concourse/Spinnaker, open Buildpacks, and more… Over time (this won’t happen overnight), we will keep making the developer experience on top of K8s simpler (not only for functions, but for all sorts of workloads) and adding value in a way that is open (in every direction). I think there’s a lot of work to do in this domain, and I think the whole ecosystem will make progress on it in 2019… but it will all be predicated on “assuming you have a rock-solid, enterprise K8s you can continuously iterate on and count on, then…” – and PKS will keep doing that better and better every day.
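To put a little flesh on that “cf push vs. kubectl” delta: with cf, a developer pushes source and the platform handles the rest; with raw Kubernetes, the developer (or their tooling) has to describe every object themselves. The sketch below uses the Kubernetes Python client just to show how much of that description lands on the developer – the image, names, ports, and replica count are hypothetical, and it assumes the `kubernetes` client library and a working kubeconfig.

```python
# Rough illustration of the "cf push" vs raw Kubernetes delta: with cf,
# `cf push my-app` builds and runs the app from source; with raw K8s the
# developer supplies a pre-built image plus explicit objects like these.
# The image, names, ports, and replica count are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

app_labels = {"app": "my-app"}

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=app_labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=app_labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="my-app",
                    image="registry.example.com/my-app:1.0",  # you build, scan, and push this yourself
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1ServiceSpec(
        selector=app_labels,
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
# ...and you still haven't dealt with routing, TLS, log aggregation, or health
# management – things the cf push experience handles for the developer.
```

That gap is exactly the space where the dev-experience work on top of K8s is headed.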

As always:

  1. Thank you.
  2. Comments welcome! Debate, discuss – it makes us all better.
  3. Code speaks – get the bits here: https://network.pivotal.io/products/pivotal-container-service
