Across four decades, I have worked as a systems engineer in the information technology (IT) industry, designing, architecting, and configuring computing systems, and representing them to buyers and operations teams.
I’ve learned to see that work as the art of designing IT solutions that amplify human productivity, capability, and creativity. For those aspirations to be realized, however, the solutions need to be reframed and translated into business value before they will be acquired and implemented.
That’s a tricky proposition in this hypercompetitive world, and we’re watching it unfold in front of our eyes amid the current buzz around AI and large language models (LLMs). The ‘arrival’ of AI on the scene is really the delivery of the promise and aspirations of six decades of iterative effort.
However, its success – defined in terms of business value – is not a given. To explain why, let me first take you back to a technical article I came across early in my career. “All machines are amplifiers,” it stated, simply and directly. That statement was an epiphany for me. Until then, I’d thought of an amplifier as just a unit in a stereo stack, or the box you plugged your guitar into.
Mind blown.
As I have pondered this realization across my career, I have come to consider IT as a collection of machines offering similar amplification, albeit on a much broader scale and with greater reach.
IT amplifies human productivity, capability, and creativity. It allows us to do things we could never do before and do them better and faster. It helps us solve complex problems and create new opportunities – for business and humanity.
To augment or to replace – THAT was the question
However, amplification is not an end in itself. In the 1960s, two government-funded research labs, sitting on opposite sides of the Stanford University campus, pursued fundamentally different philosophies. One believed that powerful computing machines could substantially increase the power of the human mind. The other wanted to create a simulated human intelligence.
These efforts are documented in John Markoff’s book, “What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry”.
One group worked to augment the human mind, the other to replace it. Whilst these two purposes, or models, are still relevant to computing today, augmenting the human mind proved to be the easier of the two to deliver – with a series of miniaturization steps culminating in the general consumer availability of the personal computer (PC) in the 1980s. PCs freed humans to be individually productive and creative, and changed how education and business were done around the globe. Humanity rocketed forward and has not looked back since.
Artificial Intelligence (AI) is now becoming commercially viable and available at our fingertips to replace the human mind. It is maturing rapidly, being implemented at breakneck speeds in multiple domains, and will revolutionize how computing is designed and deployed in every aspect from this point forward. While it came to fruition later than its 1960s sibling, its impact will be no less revolutionary with, perhaps, an end-state of intelligence that can operate itself.
Meanwhile, automation on the augmentation front has also rapidly advanced, enabling higher productivity and efficiencies for humans. It’s still a human world, but our cycles continue to be freed up for whatever purpose we can imagine or aspire to, be they business or personal endeavors.
Systems engineering – finding a path between trade-offs
From a fundamental, high-level compute standpoint, that’s all there really is – augment or replace. One of those two models must be the starting point of any system we design. To deliver on the goal, we then turn to systems engineering and design at a more detailed, complex, and nuanced level.
The primary task has always been simple in concept – to move bits (and bytes) of data into the processor’s registers, where they can be operated on. That is, get data as close to the processor as possible and keep it there for as long as practical.
In practice this can be a surprisingly difficult and expensive proposition, with a plethora of trade-offs. There are always trade-offs in IT. You can’t have it all. Even where having it all is technically feasible, in almost every case you couldn’t afford it, and wouldn’t want to pay for it anyway.
To accommodate this dilemma, at the lower levels of the stack we’ve created a hierarchy of storage and communication tiers (registers, caches, memory, local and networked storage) designed to feed our processors as efficiently and effectively as practical, enabling them to do the ‘work’ we request of them.
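To make “keep the data close to the processor” concrete, here is a minimal sketch – my own illustration, not from any particular system, with an arbitrary array size – that sums the same set of numbers twice: once walking memory in order, and once in a random order. The random pass is typically several times slower, simply because the caches and prefetchers sitting between main memory and the registers can no longer keep the data nearby.

```python
# Minimal sketch of data locality: identical arithmetic, very different access patterns.
# Sizes and names are arbitrary; timings will vary by machine.
import time
import numpy as np

n = 20_000_000
values = np.random.rand(n)

in_order = np.arange(n)                # visit elements in memory order
shuffled = np.random.permutation(n)    # visit the same elements at random

start = time.perf_counter()
values[in_order].sum()                 # cache- and prefetch-friendly pass
ordered_time = time.perf_counter() - start

start = time.perf_counter()
values[shuffled].sum()                 # same work, scattered memory accesses
random_time = time.perf_counter() - start

print(f"in-order pass: {ordered_time:.2f}s, random-order pass: {random_time:.2f}s")
```

The work done is identical in both passes; only how far the data effectively sits from the processor changes, and that is the whole game at this level of the stack.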
For me, then, designing and engineering for purpose and fit is, in essence, simple. Firstly, am I solving for augmentation or replacement? Secondly, where is the data, and how do I get it to where it needs to be so it can be processed, governed, managed, and curated effectively?
And one does not simply store, retrieve, manage, protect, move, or curate data. As we are wont to say in this industry, data explodes in volume, variety, and velocity, and all three keep growing exponentially. Nor can we prune it effectively, if at all, even if we wanted to.
Applying principles to the business value of AI
All of which brings us back to AI’s arrival on the scene. The potential for AI is huge, as we are seeing. From the systems engineer’s perspective, however, AI requires a complete data set to deliver the expected richness and depth of response. If the dataset is incomplete, ipso facto, so is the response – and in many instances it could be viewed as bordering on useless. In addition, AI algorithms can be exhaustive (and processor-intensive) or can take advantage of trade-offs.
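As a toy illustration of that exhaustive-versus-trade-off choice – my own sketch, not drawn from any particular product, with hypothetical names and sizes – consider finding the closest match to a query among a large set of item vectors: scoring every item gives the exact answer at full processor cost, while scoring only a random sample is far cheaper but may miss the true best match.

```python
# Toy sketch: exhaustive scoring versus a cheaper sampled trade-off.
# "catalogue" and its dimensions are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
catalogue = rng.standard_normal((200_000, 64))   # pretend item embeddings
query = rng.standard_normal(64)

# Exhaustive: score every item. Most accurate, most processor- and memory-hungry.
best_exact = int(np.argmax(catalogue @ query))

# Trade-off: score a 5% random sample. Far cheaper, but the true best match may be missed.
sample = rng.choice(len(catalogue), size=10_000, replace=False)
best_sampled = int(sample[np.argmax(catalogue[sample] @ query)])

print(best_exact, best_sampled, best_exact == best_sampled)
```

Real systems use smarter shortcuts than random sampling, but the shape of the decision is the same: how much completeness and accuracy are you willing to trade for cost and speed?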
This opens up a target-rich environment of problems for clever computer scientists and systems engineers to solve, and therein lie the possibilities, trade-offs, and associated costs that drive every decision at every level of the architecture – user, application, algorithm, data, infrastructure, and communications.
AI has certainly ‘arrived’, although for the systems engineer it’s more a continuation of a theme, an evolution, than something completely new. As the PC was the inflection point for the augmentation revolution in the 1980s, so AI is the inflection point for the replacement revolution in the 2020s.
The question that follows is: how are we to leverage AI effectively? We will need the right resources and capabilities in place (people, skills, tools, technology, money, et al.) and the ability within the business to use the outputs it generates. It resolves to business maturity, operational models, and transformational strategies.
Right now I see three things as lacking. First, from the provider perspective, AI platforms (and the related data management) are still limited, which means a substantial amount of DIY work to get value out of them. I’m not talking about ChatGPT itself but, for example, about how it integrates with other systems and data sets. Do you have the knowledge you need to bring AI into your architecture?
Second, operational models are not geared up to do AI with ease. AI doesn’t work out of the box beyond off-the-shelf models, however powerful they are. Data scientists, model engineers, data engineers, programmers, and operations staff all need to be in place and skilled up. Have you reviewed your resourcing and maturity levels?
Finally, and most importantly, is your organization geared up to benefit from AI? Suppose you learn a fantastic insight about your customers (such as the example of vegetarians being more likely to arrive on time for their flights), or you find out when and how your machinery will fail. Are you able to react accordingly as a business?
If the answer to any of these questions falls short, you can see an immediate source of inertia that will undermine business value, or prevent it from being realized altogether.
In thinking about AI, perhaps don’t think about AI… think about your organization’s ability to change and unlock AI’s value to your business.