Simplified IT Infrastructure: A Little Detour into Application Development

April 27, 2021

By John Duffy, CPP Chief Technologist

In my previous blog, I discussed IT infrastructure and how today's companies want all of their infrastructure to share the same architecture, be managed from anywhere with the same tools, have the same security (both protection from bad actors and acts of nature, and user access controls), the same (theoretically simple) networking, and so on. Rented infrastructure and owned infrastructure have to play nicely together.

Infrastructure is, from one perspective, a container for applications.

If infrastructure has to be simplified, then app-dev and app-prod (to coin a term) have to be simplified and play nicely together as well. Just as with infrastructure, this has not been the case in the past. But today there appears to be a strong drive toward simplification in both application development and applications in production.

In the past, a computer company provided the server hardware, the server operating system, and the programming languages. The OS and the languages were tied to the hardware. Apps developed on Company 1's "OS-and-dev-language" could not run on Company 2's "OS and language" without modification. Management tools and security tools were likewise focused on the underlying OS and hardware. Then industry groups arose that standardized languages across different OSes. This helped portability, but it was not a perfect world. Over time, certain OSes (Windows and Linux) became "standard platforms," and most development languages were built to run on them. There were still proprietary aspects of the hardware-and-software pairing, but significantly fewer of them. Some of the original applications (e.g., closed-book-of-business applications in the insurance industry) are still running on code developed 40 and 50 years ago. (If it works, no need to fix it.)

Users, app-dev teams, and infrastructure teams worked out processes by which organizations were able to develop applications to meet business needs. Those processes historically involved two to four years between the start of development and the app going into production. From a business perspective, that much lead time left a company behind its competition. The business demanded much shorter lead times.

The "non-proprietary" app-dev world (aka "open source") was able to meet that requirement; the proprietary world could not.

Meeting that requirement involved de-coupling (as much as possible) the operating system environment from the OEM hardware, and changing application development from a "monolithic" approach to a "micro-services" approach: from "years for lots of functions in one big package" to "hours or days for one function at a time." It also involved having management applications, monitoring applications, security, networking capabilities, and so on, all working in the virtualized world.
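To make the "one function at a time" idea concrete, here is a minimal sketch of a single-purpose micro-service using only the Python standard library. The "quote" service, its payload, and the helper names are illustrative assumptions for this post, not anything from a specific product; a real service would add health checks, logging, and a container image around it.

```python
# Sketch: a micro-service does ONE job behind an HTTP interface.
# Everything here (names, payload, port choice) is illustrative.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class QuoteHandler(BaseHTTPRequestHandler):
    """One service, one responsibility: return a price quote."""

    def do_GET(self):
        body = json.dumps({"service": "quote", "price": 42.0}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

def serve_once() -> dict:
    """Start the service, call it once, shut it down; return the response."""
    server = HTTPServer(("127.0.0.1", 0), QuoteHandler)  # port 0: OS picks a free port
    port = server.server_address[1]
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/quote") as resp:
        data = json.loads(resp.read())
    server.shutdown()
    server.server_close()
    return data

if __name__ == "__main__":
    print(serve_once())  # → {'service': 'quote', 'price': 42.0}
```

Because the service owns exactly one function, it can be rewritten, redeployed, or scaled in hours without touching the rest of the application, which is the core contrast with the monolithic "one big package" model described above.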

Today, almost all functions that were once accomplished by hardware have been virtualized. In the past, infrastructure companies (OEMs) differentiated themselves by offering "faster, better, cheaper" hardware; today, hardware can be commoditized. Critical-component companies (chips, circuit boards, disks, etc.) are now big players in commodity hardware infrastructure solutions (think "Supermicro"), and virtualization software companies (storage, compute, networking, security) are big players in the new software and application world. The traditional world (OEM hardware running applications) and the new world (everything virtualized on commodity hardware) exist side by side in most data centers. We are in transition from old to new.

Traditional OEMs spent 50 years developing expertise at the global (federated) management, networking, and security levels, and they are now extending their "global" or "federated" approaches from on-prem to cloud. That is their expertise. Meanwhile, the companies that built virtualized solutions for networking, security, monitoring, and management in the open-source world are beginning to extend their scope to on-prem hardware and OSes.

There is competition.

The purpose of all this activity is to help businesses react faster to changing business requirements.

The new application development approach (so-called "micro-services," or the containerized world) actually began in the proprietary world (Sun Microsystems) but gained traction in the virtualized open-source world. Now companies like VMware are providing such micro-services under the umbrella of what might be called "Virtual World Generation #1": VMware itself. VMware has integrated Kubernetes to run under the VMware software, with hardware management passed through from Kubernetes to VMware.

For the next five to ten years, we can expect both worlds to co-exist.  At some point, the adoption rate of “virtualizing everything” will become so prevalent that the old world will begin to fade away.

Our choice is to manage that transition gracefully OR crash and burn.

In my next article, I will talk about managing infrastructure and applications (as if they are different in the future!)