Cloud-Sliver

Blog

True Business Transformation

Requires A Material Change

Using Commodity Hardware

A Software Supercomputer

As more technologists agree that traditional IT computing stacks have reached end-of-life, there is growing interest in delivering a “supercomputer” that can process data and apps thousands of times faster.

Some call this hyperconverged infrastructure.  Others talk about quantum computing.  These are generally hardware choices offering faster processing with smaller footprints.

It seems few were looking at the radical transformation of the existing software stack as the delivery vehicle for disruptive outcomes.

This is the world of System Oriented Programming.

Let’s define disruptive, and then outcomes. Disruptive means, at a minimum, moving a decimal point or two – or seven.  Think 10, 100, or even a million times faster, cheaper, slimmer, or less energy-hungry.

Outcomes are the currency of change, so let’s pick a few that most can agree would be good results:

  • Current applications are made to run 1,000 to 1 million times faster
  • Storage is reduced 90%
  • Data centers shrink 90% or more measured by footprint and power consumption
  • Apps one expects to take 36 months to build are delivered, in production, in a quarter
  • Cloud costs reduced 50% – 90%
  • Oracle, VMware and other software licenses are not needed
  • Overall IT spend reduced 50% while delivering many more new apps per month
  • Such outcomes are delivered risk-free, at a fraction of the cost of current technology

Those are pretty attractive outcomes, ones most can agree are not achievable with current technology and would be great if they were.

This is disruption across virtually every aspect of modern computing – not just faster hardware. So where does it come from?

Surprisingly, the disruption comes from innovating how large software systems are designed, implemented, and deployed.

A new class of programming – “System Oriented Programming” – enables these disruptive outcomes.

System Oriented Programming innovates simultaneously across:

  • Distributed processing,
  • Database architecture,
  • Stream processing,
  • Object oriented programming,
  • Micro-services architecture,
  • Full stack development frameworks (at macro and micro scale), and
  • Compiler design.

Engineers trained in System Oriented Programming become “full system developers” who, as individual developers, replace scores of engineers in a traditional enterprise class software deployment.

Software systems that once took 18 to 36 months to build with large teams can now be delivered in a single business quarter, with a few full system developers, at a fraction of the cost.

One of the larger billing systems in the United States is for a Fortune 10 firm, with over 50 million customers, multiple classes of service, and scores of line items for each customer.

This billing system today runs across a collection of data centers so large that, put together, you would need a golf cart to traverse them.

Just the billing system costs over $1 billion yearly to operate.

Yet a larger billing system, processing over twice as many customers and service classes, runs on a collection of network cables and commodity hardware in a lab in Austin, Texas – built with, you guessed it, System Oriented Programming.

Instead of a billing cycle taking 25 days to compute, it calculates all bills in a couple of minutes.

Instead of needing a golf cart, the distance across the data center is measured with a yardstick.

Instead of paying an army of engineers hundreds of millions of dollars a year to maintain, modify and build new billing apps, this same work is done with a single engineer.

Instead of burning the electric energy needed to power a small town, the power consumption is less than that needed for a Tesla.

Instead of paying Amazon or Microsoft $700 million a year for cloud services in their data centers, the cloud services bill to run this system is close to zero.

Here is an example of what supercomputing can deliver, today.

Saving big money is always an eye-catcher. But hidden in the benefit stack is what you can do with such a technology.

This behemoth billing system now operates without a data center.  Think that through for a minute.  One of the world’s largest billing systems – no data center.  Bills can be delivered, in real time on a phone or edge device. 

A customer can see their bill, for scores of different services, on a phone, the moment they appear.

Supercomputing is the future, but it may not be coming from the corner everyone is watching.

System Oriented Programming

No Need For Data Centers

For a generation we have lived with Moore’s Law: the speed and capability of computers doubles every two years while their cost is cut in half.

There has not, however, been a Moore’s Law in software.

Actually, quite the opposite has happened. Software apps tend to accumulate more functions, used by fewer people, until nobody understands why they were created.

To organize increasingly complex apps, layers of software categories emerged for key functions:  middleware, RDBMS, virtual machines, security, graphical interfaces. Each layer created a level of abstraction with its accompanying transaction overhead.

Apps became slower and more expensive to maintain, and business users sometimes waited weeks for even a simple request to be fulfilled.

Management eventually threw up its hands: “It can’t be this screwed up!”  “Do something!”

The Cloud Era represented replacement of the data center, or parts of it, with someone else’s data center. The cloud data center runs pretty much the same tools, the same software as the internal IT shop.

Apps that took days to run in the data center took just as many days in the cloud. Building new apps in the cloud, at best, took 20 percent less time. Costs did not appreciably change. Most of all, nobody gained a significant competitive advantage because the cloud is not a new technology.

The cloud is the same technology in a different place.

Business models are becoming the ultimate competitive weapon.

New business models require very different cost and performance capabilities than today’s data centers (whether they are in the cloud or on-premise) can provide.

New business model innovators have begun experimenting with System Oriented Programming.

System Oriented Programming is the next step beyond microservices and containers.

System Oriented Programming enables entirely new outcomes because it is not limited by the current, obsolete tech stack.

Early System Oriented Programming applications produced previously unobtainable outcomes.

For example, a legacy billing app that ran for 93 hours in a data center now runs in less than a minute on a $2,000 commodity hardware platform.

The common theme is speed and dramatic cost reduction without a data center. How can this be?

System Oriented Programming delivers a different technology stack than the current Oracle/VMware/middleware stack. A new stack eliminates most I/O wait states, the bane of current technology.

System Oriented Programming is by nature massively distributed. The collective power of an inexpensive network of computing devices acts as the data center. The IT data center, which consumes between 2 and 5 percent of America’s energy, can eventually be replaced. Real GREEN progress occurs.

System Oriented Programming delivers small, lightweight apps at transaction points in a distributed system to dramatically reduce I/O wait states. Orders-of-magnitude reductions in I/O wait states are possible when replacing large enterprise applications.

System Oriented Programming employs data pipeline processing to further reduce I/O wait states.
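
As a rough illustration of the pipeline idea (not any vendor's actual implementation), the sketch below chains processing stages so each record flows through parsing, filtering, and aggregation without any stage blocking on bulk I/O; the data and function names are purely illustrative:

    # A minimal sketch of stream/pipeline processing: each stage consumes the
    # previous stage's output as it is produced, so no stage blocks waiting for
    # a full batch to land on disk or cross the network. Names are illustrative.

    def parse(lines):
        for line in lines:
            customer_id, amount = line.split(",")
            yield customer_id, float(amount)

    def only_billable(records):
        for customer_id, amount in records:
            if amount > 0:                      # skip zero/credit rows locally
                yield customer_id, amount

    def totals(records):
        sums = {}
        for customer_id, amount in records:
            sums[customer_id] = sums.get(customer_id, 0.0) + amount
        return sums

    if __name__ == "__main__":
        raw = ["A1,10.50", "A2,0.00", "A1,4.25", "A3,7.00"]  # stand-in for a feed
        print(totals(only_billable(parse(raw))))             # {'A1': 14.75, 'A3': 7.0}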

Using application-specific databases and abstraction models, System Oriented Programming reduces abstraction layers in a system. This simplification removes the I/O wait states associated with translation from one representation to another in complex systems.

System Oriented Programming takes advantage of the repetitive nature of most core business processes and co-locates the compute and storage for those repetitive tasks with the tasks themselves. This can result in immense reductions in system I/O wait states.
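
A minimal sketch of that co-location idea, assuming a hypothetical billing workload in which each node already holds its own slice of usage records, so the repetitive rating calculation runs next to the data and only small results travel:

    # A minimal sketch of "move the compute to the data": each node owns its own
    # slice of customer records and rates its own bills, so only the tiny result
    # (one total per customer) ever crosses the network. Names are illustrative.

    class BillingNode:
        def __init__(self, usage_records):
            # usage_records: {customer_id: [usage amounts]} stored on this node
            self.usage_records = usage_records

        def compute_bills(self, rate):
            # Repetitive work happens where the data lives; no remote fetch per row.
            return {cid: round(sum(usage) * rate, 2)
                    for cid, usage in self.usage_records.items()}

    if __name__ == "__main__":
        nodes = [
            BillingNode({"A1": [10, 4], "A2": [3]}),
            BillingNode({"B7": [25, 1, 2]}),
        ]
        bills = {}
        for node in nodes:                 # only small summaries are collected
            bills.update(node.compute_bills(rate=0.05))
        print(bills)                       # {'A1': 0.7, 'A2': 0.15, 'B7': 1.4}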

These system-wide reductions in I/O wait states are what deliver performance that can be 1,000 – 1,000,000 times faster than data center (or “cloud”) applications.

System Oriented Programming applications have been in production for 6 years at firms most of you know.

There is a growing constituency of innovators and early adopters who need to deliver disruptive business models, and System Oriented Programming is the technology stack that makes those models both possible and practical to deploy.

One of the largest potential disruptions enabled by System Oriented Programming is the eventual elimination of the corporate data center. New compute models, like edge computing without a centralized data center, are enabled.

As one executive who implemented System Oriented Programming for real-time data analysis recently said:

“With System Oriented Programming, we concluded there is not an application we have or can envision that really needs a data center.”

As the System Oriented Programming compute stack continues its entry into the executive toolset, the days of the legacy data center and its associated costs may well be coming to an end.

Deliver Apps at 10x The Speed

Locality of Logic

System Oriented Programming is the next evolution of microservices and distributed computing. It shortens the development time of enterprise class applications to a single business quarter.

System Oriented Programming enables application designers and implementors to make system-wide optimizations for locality – Locality of Reference and Locality of Logic.

Locality of Reference is the more commonly recognized of the two. It refers to optimizing a distributed system so that each running process only needs to access data stored locally. By avoiding network requests in the middle of computation loops, the overall speed of the distributed system can be improved significantly. These performance improvements can be multiple decimal orders of magnitude.
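
To make the idea concrete, here is a minimal, self-contained sketch (the remote store and its latency are simulated, and all names are illustrative) contrasting a loop that fetches a row per iteration with one that pulls its working set locally once and then computes purely in memory:

    # A minimal sketch of Locality of Reference: the anti-pattern fetches one row
    # per loop iteration from a remote store; the local version pulls the slice it
    # needs once and then iterates purely in memory. remote_lookup stands in for
    # any network call (RDBMS query, REST request, etc.).

    import time

    REMOTE_TABLE = {i: i * 2 for i in range(10_000)}

    def remote_lookup(key):
        time.sleep(0.0001)          # simulate network/I/O wait per request
        return REMOTE_TABLE[key]

    def sum_remote(keys):
        return sum(remote_lookup(k) for k in keys)       # I/O wait inside the loop

    def sum_local(keys):
        local = {k: REMOTE_TABLE[k] for k in keys}       # one bulk transfer up front
        return sum(local[k] for k in keys)               # pure in-memory loop

    if __name__ == "__main__":
        keys = list(range(10_000))
        for fn in (sum_remote, sum_local):
            start = time.perf_counter()
            fn(keys)
            print(fn.__name__, f"{time.perf_counter() - start:.3f}s")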

Locality of Logic is a term fewer people are familiar with – however, its impact on app development is significant.

A "typical" non-trivial enterprise application may have a few hundred different relational database tables. These tables, and the interrelationships among them, are a tangle of object encapsulations resulting in complex application code layers.

No human can simultaneously understand hundreds of data tables and the permutations of their interrelationships. Thus, today’s major corporate applications take years to build and are maintenance nightmares.

Locality of Logic, in System Oriented Programming, takes a different approach.

Locality of Logic is usually implemented through object encapsulation (object oriented programming). Here a programmer only needs to understand the inner workings of an object within the code that implements the object. When an object is used as a building block in a larger application, programmers need only understand how it works “from the outside” rather than its internal implementation.

Object encapsulation benefits do not translate equally well to the system level.

In System Oriented Programming, both through the software run-time environment "plumbing" and through design and implementation methodologies encouraged and enforced by the programming framework, enterprise class applications are designed with a vastly reduced number of database schemas.

In System Oriented Programming, these encapsulations fall from hundreds to typically fewer than ten.

The first order result is SIMPLICITY.

Simplicity brings derivative benefits. One is simplified application code.

System Oriented Programming dictates that application building-block functionality and logic be implemented locally, in a few schemas, in what the relational database world would call "stored procedures."

These “stored procedures” have Locality of Logic; the application logic is co-located with the storage schema definition and typically requires knowledge of only that schema, or at most one or two others.

While Locality of Reference simplifies by reducing I/O thrashing, Locality of Logic requires knowledge of a single database schema instead of knowledge of an intractable number of potential interactions among relational data tables.
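
A minimal sketch of Locality of Logic, using a hypothetical customer-account schema: the billing logic is defined next to the one schema it touches, so a developer can understand and change it without knowing any other tables:

    # A minimal sketch of Locality of Logic: the billing logic lives next to the
    # one schema it operates on, so a developer can reason about this building
    # block without knowing the rest of the system's tables. Names are illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class CustomerAccount:
        # The entire "schema" this logic needs, in one place.
        customer_id: str
        plan_rate: float
        usage: list = field(default_factory=list)

        # "Stored procedure"-style logic co-located with the data it touches.
        def add_usage(self, amount: float) -> None:
            self.usage.append(amount)

        def current_bill(self) -> float:
            return round(sum(self.usage) * self.plan_rate, 2)

    if __name__ == "__main__":
        acct = CustomerAccount("A1", plan_rate=0.05)
        acct.add_usage(120.0)
        acct.add_usage(40.0)
        print(acct.current_bill())   # 8.0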

Optimizing for Locality of Logic presents multiple data and compute models to the application developer. She can use the data model that most “naturally” fits the part of the application she is working on.

Needless complexity in traditional enterprise applications comes from forcing app data into a relational data storage model that may not be the most efficient way to represent the data.

When multiple data models easily co-exist, the application code becomes more intuitive, expressive, and smaller. Application logic building blocks can be embedded in the database schema rather than forced into layers that shuttle computation into and out of a database.

The simplicity delivered by Locality of Logic makes application-layer code significantly easier and faster to write. Empirical results show roughly a 10x reduction in the time needed to develop complex applications.

IT departments can deliver apps the user actually wants rather than apps that force the user to adapt her business to the application’s constraints.

Locality of Reference, which minimizes I/O wait states, can enable each server to do the work of 10 conventional servers.

Locality of Logic, with its simplicity and logic elevated to the system level, can enable each programmer to do the work of 10 conventional programmers who are forced to wade through jungles of hundreds of tables of interaction complexity.

The simultaneous optimization for both Locality of Reference and Locality of Logic can produce a 10-fold increase in resources for corporate IT.

Ten-fold increases in productivity and speed mean digital transformation may finally be a reality instead of a wishful dream.

"App Specific Optimizations

Software ASIC

System Oriented Programming

Empowering The Edge

Computing at the edge replicates your data centers (albeit in a smaller form factor).

Companies adopt this strategy in order to process closer to the customer and reduce network latency.  While these “edge” data centers may be physically smaller than centralized data centers, they rely on the same complex and expensive software infrastructure as large data centers.

If one replicates data centers, costs replicate as well.  These data centers at the network edge use VMware, Oracle, security products, and all of the other software infrastructure products associated with data centers – including all of their associated costs.

Such an implementation requires some connection with a central data center, so network latency remains a constraint.

If one runs the same infrastructure as the main data center or cloud, everything else follows – apps are cumbersome, expensive, hard to build and increasingly difficult to maintain.

The firms using this strategy of computing at the edge by replicating data centers at the edge of the network will not achieve sustainable differentiation from their competitors.

Computing at the network edge is NOT the same as edge computing.

System Oriented Programming enables the edge to use a tech stack purpose-built for edge computing – without the need for a data center (either centralized or at the "network edge").

Placing replicated data centers at the network edge enables a company to run perhaps hundreds of database processes simultaneously.

True edge computing, employing a deeply distributed, System Oriented architecture, runs tens of thousands to tens of millions of database processes simultaneously.  Such edge computing atomizes conventional software infrastructure into a granularity that makes edge computing a “difference of kind.”

This type of edge computing delivers its promise via a distributed architecture that can utilize a heterogeneous hardware mix of servers, embedded devices, and mobile devices such as tablets and phones.

In System Oriented Programming edge computing, the requirement for conventional, legacy data center software infrastructure disappears.

There is no need for Oracle or any other data center commercial DBMS.  Ditto for VMware, conventional data center security products and middleware.

Edge computing can thus deliver real time processing, on virtually any hardware, at the network edge because it is built with an entirely different software technology stack.

The System Oriented Programming edge software stack is massively optimized to eliminate I/O wait states. The delay from hitting the enter key to parsing over 100 million records in a query is imperceptible.

The differences between computing at the edge and System Oriented edge computing are seen in the business outcomes.

If you are computing at the edge, nothing much changes except some network delays are reduced.

However, with System Oriented edge computing, everything changes:

  • Applications run 1,000 to 1 million times faster
  • Apps that used to run in large cloud data centers now use bare Unix instances that reduce “value add” cloud costs to zero
  • Storage is reduced 80-90% because of the elimination of RDBMS legacy constraints
  • Applications that once took 2-3 years to build from scratch, now take a single business quarter
  • IT costs are reduced 50% while applications are delivered in a fraction of the previous time

While these outcomes are impressive, the real power of edge computing enabled with System Oriented Programming comes from what it enables, not what it eliminates.

Legacy batch systems, with 40-year-old patched (or even lost) source code, can now be rebuilt in weeks and made to run in real time.

New digital applications can be imagined, built, tested, placed in production, and put in customers’ hands or on their phones in a single business quarter.

Two large corporations, each with intractable legacy systems, can partner to blend one’s products with the other’s distribution capabilities and deliver a real-time customer experience, in a single business quarter.

The outcomes are quite different. Computing at the edge kicks the can down the road where nothing much changes. System Oriented edge computing delivers customer intimate apps, in real time, at a fraction of legacy world costs.

System Oriented edge computing enables true digital transformation.

Enable Servers To Do 10x The Work

Locality of Reference

Microservices, containers, object oriented programming, and building software from reusable pieces make good sense.

Anything that rapidly enables expensive developers to do more with less is a good thing.

Now, microservices architectures have opened the door to an even more fundamental compute paradigm shift – MICROSYSTEMS architectures and System Oriented Programming.

Microsystems architectures are the result of simultaneous innovation across the entire modern development stack:

  • Distributed processing,
  • Database architecture,
  • Stream processing,
  • Object oriented programming,
  • Micro-services architecture,
  • Full stack development frameworks (at macro and micro scale), and
  • Compiler design.

Microsystems enable a developer to build, test, and deploy, on her own laptop, a microsystem that is a 100 percent functional equivalent of a major application.

Microsystems – not simply microservices!

This is the result of System Oriented Programming.

Let’s go there for a minute.

Now a developer, working from home, can securely build a fully distributed node for a major production system. That node will be one instance of what will become hundreds or thousands of nodes in the running system. Every one of those nodes behaves like every other node, albeit with different data.

Why is this important?

System Oriented Programming optimizes for “Locality of Reference.”  For those who are not computer scientists, that means the data needed for the next CPU instruction is accessed locally and does not have to be fetched remotely (say, from an Oracle database).

Getting rid of Oracle is the promised land for many CIOs, but the added benefit is creating supercomputer performance on existing hardware.

With locality of reference, application-layer code can run a thousand to a million times faster. In practical terms, every server can now do the work of 10 servers running traditional software. Data centers and cloud resource needs can be 1/10th their previous size.

With System Oriented Programming, major apps that used to take 2 years to develop can now go from concept to production in a single business quarter.

Let’s get back to our developer friend, working quietly and happily from her home.

She is building the entire enterprise application.

Every system node behaves like every other and plugs into the System Oriented Run-Time Environment. Once her system node is built, she can automatically bring up thousands of identical nodes, fully distributed, and automatically distribute and load the corporate data into them – all behind the cherished IT firewall.
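
As a rough sketch of that build-one-node, launch-many pattern (the partitioning scheme and node count here are illustrative, not a specific product API), every node runs identical code over its own slice of the data:

    # A minimal sketch of "build one node, then bring up many": every node runs the
    # same code and behaves identically; only the slice of data it is handed differs.

    class Node:
        def __init__(self, node_id, records):
            self.node_id = node_id
            self.records = records            # this node's local slice of the data

        def query(self, predicate):
            # Every node answers the same queries the same way, over local data.
            return [r for r in self.records if predicate(r)]

    def launch_cluster(all_records, node_count):
        # Deterministic partitioning stands in for the real data-distribution step.
        slices = [[] for _ in range(node_count)]
        for i, record in enumerate(all_records):
            slices[i % node_count].append(record)
        return [Node(n, s) for n, s in enumerate(slices)]

    if __name__ == "__main__":
        data = list(range(1_000))
        cluster = launch_cluster(data, node_count=8)
        # Scatter the same query to every node, gather the partial results.
        hits = [r for node in cluster for r in node.query(lambda r: r % 97 == 0)]
        print(len(cluster), "nodes,", len(hits), "matches")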

She also can run this now-massive application in a completely decentralized configuration without a data center at all.

System Oriented Programming is where the compute world is going because it must.

Applications are moving to the customer. They are collecting data across vast networks and disparate locations. Customers need to compute where the data is, not be forced to send data to centralized locations that concentrate intrusion threats and risk communication delays and failures.

There is no way the world will build mini data centers wherever the customer exists; hence the need to run at supercomputer speeds on tiny hardware.

This is the world of the fully distributed app and System Oriented Programming is its foundational programming technology.

System Oriented Programming applications approach security fundamentally differently.

Microsystems assume they are operating on a compromised hostile network. Every microsystem instance has full security infrastructure built in. Microsystems never assume they are behind the “safety” of a corporate firewall. This security paradigm is designed for the age in which we find ourselves.

Adopters of System Oriented Programming believe the Black Swans of Coronavirus and societal disruption will continue to visit. More critical resources will work from home and remote locations. Security threats will continue to spread via the misapplication of legacy security to web-based and mobile applications.

Today, System Oriented Programming is safely and securely bringing super-computer performance applications to where the data (and the developers) reside – not forcing the data to be transported to where the IT shop might be.

This is a prime benefit from harnessing Locality of Reference for massive speed on small hardware.

Evolving Microservices Into Microsystems

Delivering Parallel Apps

Microservices, containers, and object-oriented programming are delivering some level of benefit in replacing monolithic apps. Unfortunately, that benefit is not enough to move the needle against the 70 to 90 percent failure rate of digital transformations.

Transforming a monolithic, legacy app into an agile, slender, responsive set of microservices requires months of evaluating every subroutine.

While the result of the transformation may be an agile app, the process of getting there is not much different from writing the monolithic app in the first place.

It does not have to be this way.

System Oriented Programming is the next step beyond microservices, containers, and object-oriented programming. Instead of delivering a collection of microservices, it delivers a collection of small, fully containerized, independent micro-apps.

Its disruptive outcomes are possible because the micro apps are freed from the current, legacy underlying tech stack.

System Oriented Programming provides a minimalist software stack with persistent storage (database) at the bottom of the stack, distributed processing middleware in the middle of the stack, and GUI widgets at the top of the stack.

The System Oriented Programming software stack can run anywhere – from large servers to inexpensive hardware at the network edge. The System Oriented Programming software stack minimizes I/O, which enables micro apps to run at near-silicon speed.

System Oriented Programming technology consists of simultaneous innovations in:

  • Distributed processing,
  • Database architecture,
  • Stream processing,
  • Object oriented programming,
  • Micro-services architecture,
  • Full stack development frameworks (at macro and micro scale), and
  • Compiler design.

Building micro apps, with System Oriented Programming is an entirely different experience from rewriting/recoding a legacy app.

Many legacy apps have evolved into code thorn beds that do not provide what the business needs. Many firms, particularly with their Customer Care and Billing Systems, have adapted their business to the constraints of their legacy apps because the apps do not natively match the firm’s business processes.

Such monolithic apps are so onerous, so dangerous to change, little if any innovation can take place in their boundaries. Just think of it:  the most important customer-touching systems cannot be touched without fear of disaster.

System Oriented Programming offers a very different alternative. A System Oriented Programming micro app can be developed and delivered at full production scale in a single business quarter.

In System Oriented Programming, a single engineer builds a micro app which has the core functionality required for billing, or check processing, or customer care management, or whatever the required business functionality might be. The micro-app is developed as a single node (running on the engineer’s desktop) and then, with the push of a button, is replicated across all nodes in the system, with aggregate access to the full production-scale data collection.

Software systems that once took 18 months to 36 months to build with large teams, can now be delivered in a single business quarter, with a few full system developers, at a fraction of the cost.

No one has to touch or modify the fragile, scary legacy Customer Care and Billing System.

There are now two Customer Care and Billing Systems running in parallel. Now comes the fun part.

With two systems running in parallel, you now have two systems independently calculating customer bills. If the systems agree, there is high confidence the bill is correct. If they disagree, you have flagged a problem BEFORE the customer sees the bill.
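
A minimal sketch of that reconciliation step, with illustrative account names and bill amounts: each account's legacy and parallel bills are compared, and any disagreement is flagged for review before a customer ever sees it:

    # A minimal sketch of running two billing systems in parallel: bills are
    # compared account by account, and any disagreement is flagged for review
    # before the bill goes out. The bill values here are illustrative.

    def reconcile(legacy_bills, parallel_bills, tolerance=0.01):
        flagged = {}
        for account, legacy_amount in legacy_bills.items():
            parallel_amount = parallel_bills.get(account)
            if parallel_amount is None or abs(legacy_amount - parallel_amount) > tolerance:
                flagged[account] = (legacy_amount, parallel_amount)
        return flagged

    if __name__ == "__main__":
        legacy = {"A1": 42.10, "A2": 17.35, "A3": 99.99}
        parallel = {"A1": 42.10, "A2": 18.05, "A3": 99.99}
        print(reconcile(legacy, parallel))   # {'A2': (17.35, 18.05)}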

A parallel Customer Care and Billing System can be the QA system for new billing or customer care features. The new parallel system can become the access point for the major accounts sales team since, because of its processing speed, it has real-time capabilities that are not possible with the legacy system.

The parallel Customer Care and Billing System can run in parallel forever as a QA system – or, after 6 months or a year, with every transaction being tested and reconciled, it can replace the onerous (and expensive!) legacy system.

System Oriented Programming frees corporations from paying the never-ending “budget tax” of Oracle, VMware, and other legacy technologies.

Parallel Apps Require

Continuous Real-Time ETL

System Oriented Programming

Transformation Behind The Corporate Firewall

System Oriented Programming enables entirely new business models through its massively disruptive outcomes. One of those is edge computing: the edge without the need for a central data center.

Current computing at the network edge is too often about location. The technology, delivered by current hardware vendors, remains the same.

With System Oriented Programming, edge computing becomes an entirely new distributed processing software technology, eliminating the expensive legacy software taxes (Oracle, VMware, etc.) used to develop and deploy large production systems.

With such edge computing there is no need for data centers, in the cloud or on premise.

Companies understand the cloud is not nearly as secure as their internal network. If they forget, the Capital One CISO mid-career implosion is there to remind them.

Eliminating the data center does not mean eliminating the corporation’s infrastructure of security, governance, and best practices. For the CIO, eliminating much of the need for a data center with System Oriented Programming means dramatically reducing the 66 percent of IT budgets typically spent on maintaining expensive, and now unnecessary, legacy software such as Oracle and VMware.

For the CIO, eliminating much of the data center via System Oriented Programming means freeing the CIO from being a purchasing agent hamstrung by legacy vendors who do not believe the CIO has other options.

Operating behind the corporate firewall in the midst of the company’s governance and security infrastructure, System Oriented Programming apps deliver transformative benefits to corporate application portfolios.

One early adopter used System Oriented Programming technology to reimplement their billing system as a distributed processing application. The new billing system went from inception to first production release in a single business quarter. The company has been running both the legacy and the new billing system in parallel for over a year, using the new System Oriented Programming billing system as a quality check on the legacy system, and has been able to virtually eliminate the 4% billing error rate it had been struggling with.

System Oriented Programming provides a lightweight software stack that can run on almost any hardware platform (down to the smartphone in your pocket). Each software instance is a self-contained system that stores data, runs application code, requests data from other instances, responds to data and compute requests, and serves up web-based user interfaces both to interactive (human) users as well as other software instances.

System Oriented Programming delivers inherently loosely coupled distributed processing apps with built-in tools and frameworks for scaling and managing a deployed network of compute software instances. For the application developer and the business domain expert, System Oriented Programming enables them to develop and test functionality on small bite-sized, easy to understand, subsets of data and then easily integrate and scale the collection of system building blocks. Prototyping major system features can be done in hours and days instead of weeks and months. Similarly, moving from prototype to production scale occurs in days instead of months.

System Oriented Programming applications are developed by full-stack software developers. In these applications, each software instance is a full-featured “system” containing database technologies, distributed processing middleware, application logic, and full-featured web servers that expose and manage web services API’s and HTML-based user interfaces. Features and capabilities can be developed and tested on a single software instance and then easily deployed throughout the distributed processing network.
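
A minimal, self-contained sketch of such an instance, using only the Python standard library and illustrative data: local storage, application logic, and a small web API live in one process (a real instance would also handle security, peer-to-peer requests, and a richer user interface):

    # A minimal sketch of a self-contained software instance: local storage,
    # application logic, and a web server exposing a small API, all in one process.
    # The endpoint and data are illustrative.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    LOCAL_STORE = {"A1": {"plan": "gold", "balance": 42.10},
                   "A2": {"plan": "basic", "balance": 17.35}}

    def account_summary(account_id):
        # Application logic runs against the instance's own local data.
        record = LOCAL_STORE.get(account_id)
        return {"account": account_id, **record} if record else None

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # e.g. GET /accounts/A1  ->  {"account": "A1", "plan": "gold", ...}
            parts = self.path.strip("/").split("/")
            summary = (account_summary(parts[1])
                       if len(parts) == 2 and parts[0] == "accounts" else None)
            body = json.dumps(summary if summary else {"error": "not found"}).encode()
            self.send_response(200 if summary else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()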

System Oriented Programming enables the CIO to finally have the tools their IT organization needs, for fast responsive development, behind the corporate firewall, in a safe, secure, governance-driven environment that delivers the price-performance transformation that “the cloud” has proven unable to provide.

Freeing the corporation from the tyranny of most legacy data center costs via System Oriented Programming means the CIO can, finally, be a true transformation agent for the enterprise.

"More of the Same" Isn't Good Enough

Real Transformation