I was quoted here as saying that the time for “bar fights” about whether HCI and software-defined storage were ready is now over.
It was a weird “Chad analogy” – but “bar fights” represents the silly, stupid – and ultimately fruitless – aggression/hot air venting that we’ve all seen in our lives at one point or another.
I find that a ton of energy is wasted on arguments that frankly I think are past the point of being relevant. Want some examples?
- For years, people furiously argued about the fit of SDS and HCI and whether there were some workloads that were “intrinsically” not a fit. Then all of a sudden, the debate became tired and frankly silly.
- Think of how much ink was spent on KVM vs. ESX vs Hyper-V battles.
- Think about how much blood, sweat, tears (and time and money) was wasted on OpenStack in the enterprise (OpenStack continues to live on in the NFV world, but has dwindled to low relevance in the Enterprise). Yes, someone will come screaming at me on this topic, and I invite the dialog – but I think it’s become sufficiently clear to most people involved where it’s heading.
- Think of the time spent on the kernel-mode VM vs. Container debate – and finally people are realizing it’s “meh” – sometimes you use one or the other, but most often, you use both (yes, with shifting value of each layer dependent on use cases).
- The debate on container/cluster managers seems to be settling down too – remember, it wasn’t so long ago that it was all about Docker, all the time, with people spilling a lot of ink about Docker valuations in the billions upon billions. Then people realized the container itself wasn’t the differentiator (it’s very important, and a key abstraction, and Docker has become the de facto enterprise container standard) – the container toolset (registry, management, app performance management, container lifecycle, declarative tools for container/cluster management, telemetry) matters more. Now we’ve moved on to the point where Kubernetes has settled as the de facto standard in the container/cluster management domain – and the industry coalesces once again.
- I believe we’re nearing the end of the stupid debates on public vs. private cloud and the relative value (hint, it’s not about “cheaper” or “security” – rather it’s about agility, different economic models, data gravity, workload fit, governance) – but MAN that “bar fight” consumed a lot of energy.
I **think** each of these “bar fights” is rooted in the fact that people LOVE to take sides, love to argue – particularly where it doesn’t actually matter, and therefore the debate is safe.
It’s also safe to be polarizing early on (and this brings “thought leadership” eagle scout badges to those who take those positions) – because it’s too early to be “fighting momentum” – the loud voices create momentum.
The debate (whatever it is) starts to fade to black when things actually become most important – people move on to a more pragmatic mode of “getting things done”, and their attention starts to shift to a new battleground.
The latest battleground I’m seeing is “Digital Transformation”. Everywhere you turn, someone has a tool/product/platform that will “transform you digitally” in some form or another. I see a great ad campaign from ____ (an unnamed long-time legacy software vendor) suggesting they will digitally transform you – just trust their tools.
I think this is entirely the wrong debate.
It’s not about one tool/product/platform or another. Don’t get me wrong – tool/product/platforms are important – BUT anyone who says they have a “product widget” answer that will “digitally transform you” – well… I would be suspicious.
Likewise, it’s not about using magic words that somehow make you cool enough to play with the new kids. I see strategic consultants that bring out PowerPoint after PowerPoint and put together the magic words in the right order, and sadly that gets some traction too.
Like every change, I think “Digital Transformation” is about making hard choices about what you choose to do and what you stop doing. There’s only a finite amount of time, money, people – finite resources – and therefore getting crisp and clear here is important.
Put otherwise, it’s about changing what you construct and what you consume. I’ve noticed this pattern over and over again, at different layers of the stack.
It’s about challenging the status quo of HOW you do things – and that in turn comes down to challenging the status quo of the HOW of people/process, and culture/organization.
Change is about leadership and people.
Here’s a picture that shows the “pattern of success” I see in Enterprises as they navigate these changes. Don’t feel alone – like all Enterprises – none are “born in the cloud” natives, and each has real, material technical debt they need to keep working, and will be working for years.
What this picture shows is a couple of things:
- Chad Sakac is a terrible artist, and needs to work on this.
- Some “ground rules” for navigating these transitions, and the common “state stacks” in Enterprises and the relationships between them. I’ve synthesized this from a LOT of customer interaction (both positive and negative) – a lot of learning.
Here’s more on #2.
- There are multiple “state stacks” where the whole stack has characteristics (only some listed) and supporting organizations that start from the business process and supporting application themselves. These “state stacks” have INTRINSIC, FUNDAMENTAL differences between them.
- Everyone needs to stop wasting time dismissing one “state stack” or another – and also dismissing the associated behaviors/technologies and tools that they don’t like. Rather, embracing the fact that you absolutely will have multiple domains for years to come – it’s freeing. I like the Gartner “bi-modal” idea – and this echoes that idea. I only take exception with the fact that in my experience, most enterprises don’t have two “state stacks” (or two modes), rather they have multiple “state stacks” – so “n-modal” would be more correct. One ground rule is that you will have multiple operational models for the different eras of application stacks (and the implications they create).
- Embrace that the differences in the “state stacks” of each domain are REAL. Debates about whether the way one part of your world works is better than the way another does are likely silly, and a waste of time. We’ve all likely seen this manifest as the new kids and the old crew at loggerheads, each thinking the other is completely wrong. If you try to apply CI/CD principles to your legacy SAP ERP system without shifting the whole app architecture – you’re going to be in a world of hurt. This means leaders need to shut down the internal battles, as people in different operational models, tools, and processes tend to sling poop at each other – a waste of time, and the wrong “bar fight” to be having.
- Accept that between each of the “state stacks” you have a “semi-permeable membrane”. People who studied biology know what I mean. A “semi-permeable membrane” allows some things through, but stops others (a cell membrane is an example of a semi-permeable membrane). I’ve drawn this as arrows – some that pass through, some that do not. What is allowed through universally: APIs and data standards (drawn at the top). What is NOT allowed through: any tool, process, or organizational model that is not shared between “state stacks”. Why? Organizational models optimize for the WAY things get done. Org models manifest into processes, which in turn manifest into tools. If you see processes, tools, or org models spanning the “state stacks”, I don’t see that succeeding often.
- Hint: if you are picking your change management tool (BMC Remedy) because it fits with your process (ITIL) because it fits with your org (silos with SLAs between them), and your technology in that “state stack” (Mainframe and traditional open systems/CI) – it’s NEVER going to be the right choice for another “state”, even though it MAY be the right choice for your legacy. I’m finding this is one of the hardest things for IT teams to embrace – they are so very used to creating horizontal processes, tools, and structures that span the whole enterprise.
- Embrace that the “highest value” move you can make is if you can move work (apps and associated data) from legacy approaches towards the right, moving them towards the more modern approaches, and the highest order abstractions you can. If you have a dollar to spend – moving work from left to right has the highest first, second and third order benefits. It’s not easy, but it can be done. It can be done for mainframe apps and data. It can be done for legacy 3-tier apps and application platforms/app servers. It’s not easy, but it can have a big payoff. The flipside? If you’re NOT going to re-design the app itself, don’t fight the fact that it’s living in a “legacy state stack”. You can optimize in that “state stack” (see the brute force all-flash array transition, or the move towards HCI and software-defined stacks underpinning traditional apps) with great effect.
- Get company-wide religion on the only thing that MUST span the different “state” stacks – APIs and Data. These are the things that must be able to span, because after all, it’s almost NEVER the case that a business process or application lives only in one of the “state” stacks – and therefore you need to be able to count on ways to communicate across them, without binding them (with tools, processes or organizational bindings) – because the way they operate within the “state” stack is so different. They operate on different time scales and cadences. They operate on different cultures. They all need two things from other “state stacks”: APIs and Data.
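To make the “only APIs and Data cross the membrane” ground rule concrete, here is a minimal sketch in Python. All the names here (`CUSTOMER_SCHEMA`, `validate_record`, the sample records) are hypothetical illustrations, not anyone’s actual API – the point is that the legacy stack and the modern stack each keep their own tools and processes, and share nothing but the data contract.

```python
# Hypothetical shared data contract between "state stacks".
# The mainframe batch export and the cloud-native microservice never share
# tools, processes, or org models -- ONLY this contract spans the membrane.
CUSTOMER_SCHEMA = {
    "customer_id": str,   # the shared key every "state stack" agrees on
    "name": str,
    "balance": float,
}

def validate_record(record: dict) -> bool:
    """Check a record against the shared contract: exactly the agreed
    fields, each with the agreed type."""
    return (
        set(record) == set(CUSTOMER_SCHEMA)
        and all(isinstance(record[k], t) for k, t in CUSTOMER_SCHEMA.items())
    )

# A record emitted by the legacy stack's nightly batch job...
legacy_export = {"customer_id": "C-1001", "name": "Acme", "balance": 42.5}
# ...is consumable by the modern stack, because the contract (not the
# tooling) is what spans.
assert validate_record(legacy_export)
assert not validate_record({"customer_id": "C-1002"})  # incomplete -> rejected
```

Either side can change how it produces or consumes these records – batch COBOL job or event-driven microservice – without renegotiating anything but the contract itself.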
Perhaps the biggest “ground rule” is the leaders that are pushing through change have to challenge their people, their long-standing “this is the way we’ve always done it”.
I’m going to pick one example to double-click. Let’s look at the networking layer.
In the “Mainframe/Legacy”, “traditional x86 workload”, and “Virtualized x86” state stacks, the traditional multi-tier virtualizable app drives a model where the networking technology, tools, processes, and people/culture are commonly VERY rigid, slow, and defined by SLAs/processes – not by APIs.
Put in basic terms – the way you comply with isolated security domains is physical network ports, physical firewalls, and manual VLAN configuration – all handled through an ITIL-modeled change management process. That approach works GREAT for that “state stack”.
Conversely, that approach STARTS to break down with “Cloud Ready” highly automated IaaS – simply due to speed and complexity of automation.
That approach TOTALLY breaks down for a modern “Cloud-native” state stack and the associated app built on a series of micro-services with a CI/CD pipeline that automates build, test, deploy multiple times a day for any one of a set of micro-services.
In that case, the network along with the security domains must be implemented in software, and be completely programmable.
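What “security domains implemented in software, completely programmable” looks like in practice: in the Kubernetes world, an isolation rule is just a NetworkPolicy object that a CI/CD pipeline can generate and apply on every deploy, instead of a VLAN change ticket. Here’s a sketch that builds such a manifest in Python – the app names, namespace, and port are hypothetical, but the manifest shape is the standard `networking.k8s.io/v1` NetworkPolicy.

```python
def isolation_policy(namespace: str, app: str, allowed_from: str, port: int) -> dict:
    """Build a Kubernetes NetworkPolicy manifest that isolates one app,
    allowing ingress only from a named peer app on one TCP port.

    A pipeline would emit and apply this (e.g. `kubectl apply -f -`) as
    part of every deploy -- the security domain is code, not a ticket.
    """
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": f"{app}-isolation", "namespace": namespace},
        "spec": {
            # Which pods this policy governs:
            "podSelector": {"matchLabels": {"app": app}},
            "policyTypes": ["Ingress"],
            # The ONLY traffic allowed in:
            "ingress": [{
                "from": [{"podSelector": {"matchLabels": {"app": allowed_from}}}],
                "ports": [{"protocol": "TCP", "port": port}],
            }],
        },
    }

# Hypothetical example: only "frontend" pods may reach "orders" on 8080.
policy = isolation_policy("payments", "orders", "frontend", 8080)
assert policy["spec"]["podSelector"] == {"matchLabels": {"app": "orders"}}
```

Because the rule is data, it rides through the same build/test/deploy pipeline as the app itself – which is exactly what the ITIL-modeled manual process cannot do multiple times a day.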
What happens if you don’t do this – but stick with the “way things have worked”? Answer = fail.
I see this over and over again. At many customers, the networking team is somewhat (strangely) isolated from the rest of IT. Sadly (not always) they are relatively rigid in the way they operate – because what they’ve done has worked. But then the cloud-native app comes along, and the dev team wants to constantly be pushing updates. They are willing to embrace the fact that security is now part of their responsibility, not “someone else’s” – and are willing to comply with network isolation and security domains. What they are NOT able to embrace is a slow, rigid legacy process and a totally opaque interface to the world of the network. This, BTW, is why VMware NSX-T being an embedded, integral part of Pivotal Container Service is so important. We didn’t do it “just because” – we did it because there was a real customer need: for the people consuming Kubernetes to treat the network as an essential and programmable part of the container/cluster management domain. Not only for multi-tenancy (now), but also for rich micro-segmentation (soon).
I’ve picked one example (networking), but the following question can be generalized and directed to any part of the stack: Q: “Can multiple “state stacks” be handled by a singular team?”
Answer, in my experience = possibly… but only if they embrace that a totally different operating model means no processes and no tools cross over – only APIs and data models “transit” the “state stacks”, by definition.
In my experience, this rarely works, BTW – because processes manifest into tools and organizations.
Perhaps the more valuable “bar fight” to be having (again, in my opinion!) is about how to navigate these changes – how to guide people to completely challenge what has worked so well for them.
In the example of networking (and you could do this for every part of the stack, all the way up to the app layer itself) – if your networking team isn’t willing to embrace doing networking in a totally new way – in effect “partitioning their brains” – they are NOT going to be able to span the domains.
It doesn’t mean they are bad – but rather, as a leader, you need to applaud them for what they do so well in one “state” stack, and furiously STOP them from letting their approaches “infect” the adjacent stacks, and start to think about how you will handle the work in the adjacent “state stack”. Again, using networking as an example, maybe it’s a case of the traditional networking team being responsible for a relatively flat L2/L3 network for the new domain, with fences at the perimeter, but tasking the people responsible for the Kubernetes layer with compliance and attestation of network/security compliance (the work = the “what”) – just don’t force them to do it the same WAY (the “how”).
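The “move the what, not the how” split above implies the Kubernetes team needs a way to *attest* to compliance in their own idiom. A hedged sketch of what that might look like: a check, runnable in a pipeline, that every namespace carries a default-deny ingress NetworkPolicy (empty `podSelector` covering all pods). The function names and sample namespaces are hypothetical; the default-deny pattern itself is standard Kubernetes practice.

```python
def has_default_deny(policies: list[dict]) -> bool:
    """True if any policy selects every pod (empty podSelector) and
    governs Ingress -- i.e. a default-deny baseline is in place."""
    return any(
        p.get("spec", {}).get("podSelector") == {}
        and "Ingress" in p.get("spec", {}).get("policyTypes", [])
        for p in policies
    )

def attest(namespaces: dict) -> dict:
    """Map each namespace to pass/fail for the default-deny rule.

    This is the 'what' (prove isolation is enforced) done the new 'how'
    (code in a pipeline), instead of a change-management review meeting.
    """
    return {ns: has_default_deny(pols) for ns, pols in namespaces.items()}

# Hypothetical inventory of NetworkPolicies per namespace:
namespaces = {
    "team-a": [],  # no policies at all -> fails attestation
    "team-b": [{
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny"},
        "spec": {"podSelector": {}, "policyTypes": ["Ingress"]},
    }],
}
report = attest(namespaces)
assert report == {"team-a": False, "team-b": True}
```

The traditional networking team still owns the fences at the perimeter; this check is how the platform team proves, continuously and automatically, that the interior is compliant too.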
Driving people/process change is a much harder challenge than any one tool or any one technology – but THAT is a bar fight worth having!