The GenAI revolution takes us back to the golden age of computing

Mondo Finance Updated on 2024-02-01

In the 2020s, GenAI will permeate nearly every field of computing, for individuals and businesses alike. There are many intriguing parallels between the AI revolution and the development of computing in the 1990s.

Translated from "How The GenAI Revolution Reminds Us Of 1990s Computing" by Janakiram MSV. Generative AI (GenAI) represents a major advance in the evolution of computing, and it has the potential to reshape the computing landscape. Over the past few decades, computing has moved from mainframes (1980s) to client-server (1990s) to the web (2000s) to cloud computing and mobile devices (2010s). In the 2020s, we will see generative AI permeate almost every field of computing, both personal and business. Its advent marks a paradigm shift comparable to the technological revolution of the 1990s. Let's look at the similarities between these two eras and highlight the trends shaping the future of computing.

Then: The 1990s saw a diversity of CPU architectures, with DEC Alpha, Sun SPARC, and Intel x86 dominating the market.

Now: Amazon, Google, Microsoft, and Nvidia lead the market with GPUs and AI accelerator chips.

Echoing the diverse CPU architectures of the 1990s, AI accelerators are becoming the workhorses of GenAI workloads. Amazon's investments in its Trainium and Inferentia chips, Google's emphasis on tensor processing units (TPUs), and Microsoft's recent formal entry into the space with Azure Maia attest to this trend. OpenAI is also expected to develop its own chips for training foundation models. Along with custom AI hardware, the software layer that interfaces with it is evolving. While Nvidia's CUDA dominates this space, the AWS Neuron SDK, the Azure Maia SDK, and ONNX are gaining traction. To take full advantage of custom hardware, deep learning frameworks such as TensorFlow, PyTorch, and JAX are being optimized for these layers. Custom AI accelerators recall the variety of CPU architectures available in the 1990s.

Then: Traditional original equipment manufacturers (OEMs) such as Compaq, Dell, HP, IBM, and Sun were the hardware powerhouses.

Now: Public cloud platforms such as AWS, Azure, and Google Cloud have become the new OEMs, playing a key role in hosting and deploying AI technologies.

In the 1990s, OEMs such as Compaq, HP, IBM, and Sun shipped servers based on specific CPU architectures. In today's context, the public cloud providers are the closest analogue to those OEMs.

Their combination of proprietary hardware, bare-metal or virtual servers, and custom software layers closely resembles the end-to-end stacks some vendors shipped, such as Sun's, built on SPARC processors, the Solaris operating system, and the other components needed to run workloads.

Then: Linux and Windows were the core operating systems, the foundation of computing.

Now: Foundation models have become the kernel of the AI operating system; some are open source, others proprietary.

The debate between open source and commercial software dates back to the 1990s, with the rise of GNU and the free and open source software (FOSS) movement. Fast forward to 2024, and we are still debating the pros and cons of open versus closed foundation models. Llama, led by Meta, and models from other players such as Mistral are gaining importance on the open side. On the proprietary side, we have OpenAI's GPT-4, Google's Gemini, and AWS's Titan, along with models such as Anthropic's Claude 2, AI21's Jurassic-2, and Cohere's Command. Large language models (LLMs) will become so important that they will be integrated into the OS kernel itself, providing generative AI capabilities and even self-healing for the operating system.

Then: The operating system shipped with the necessary utilities and commands.

Now: Vector databases, search, and orchestration tools form the backbone of AI utilities, enhancing the functionality and efficiency of AI platforms.

Utilities and commands are built into almost every operating system to manage it; from basic file management to advanced optimization tools, the OS handles it all.
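The retrieval layer behind such AI utilities can be sketched in a few lines. This is a minimal, hedged illustration: the bag-of-words "embedding", the `VectorStore` class, and the prompt template are hypothetical stand-ins for a real vector database, retriever, and ranking model, not any particular product's API.

```python
import re
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real stacks use learned dense vectors.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """A minimal in-memory stand-in for a vector database."""
    def __init__(self):
        self.docs = []  # (text, vector) pairs

    def add(self, text: str):
        self.docs.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 2):
        # The "retriever": rank stored documents by similarity to the query.
        qv = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_prompt(query: str, store: VectorStore) -> str:
    # Retrieved passages become contextual input that shapes the LLM's answer.
    context = "\n".join(store.retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

store = VectorStore()
store.add("TPUs are Google's custom AI accelerators.")
store.add("Solaris was Sun's operating system in the 1990s.")
store.add("Trainium is Amazon's chip for training models.")
print(build_prompt("Which accelerators does Google build?", store))
```

The point of the sketch is the layering: the store and retriever sit outside the model, and their output reaches the LLM only as text injected into the prompt.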

Similar to these utilities, a vector database paired with a retriever and a ranking model will be an essential part of the AI stack. The new AI stack treats it as a layer that sits on top of the LLM, influencing its responses by providing contextual input through prompts. Advanced applications will use them to automate a variety of tasks that rely on storage, search, and retrieval.

Then: Command-line interfaces powered by Bash and Zsh, along with the sophisticated graphical interfaces built into Windows and macOS, democratized access to computing.

Now: AI platforms such as Hugging Face, Azure ML, Amazon Bedrock, and Google Vertex AI have become the new "operating system shells," making AI technology more accessible and user-friendly. The APIs these platforms expose provide access to the underlying models in much the same way a shell provides access to the OS. They offer a simple interface for pre-training, fine-tuning, versioning, deploying, and running inference on models in the new OEM environment (the public or private cloud).
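To make the shell analogy concrete, here is a hedged sketch of a tiny client that maps shell-like verbs onto a model platform's REST endpoints. The base URL, endpoint paths, and payload fields are all hypothetical, chosen only for illustration and deliberately not any specific vendor's API; the sketch builds the requests rather than sending them.

```python
import json

class ModelShell:
    """Maps shell-like verbs onto hypothetical platform REST endpoints."""

    def __init__(self, base_url: str, model: str):
        self.base_url = base_url.rstrip("/")
        self.model = model

    def _request(self, verb: str, body: dict) -> dict:
        # Returns the request we *would* send; a real client would POST it.
        return {
            "url": f"{self.base_url}/models/{self.model}/{verb}",
            "body": json.dumps(body),
        }

    def fine_tune(self, dataset_uri: str) -> dict:
        # "Command" for adapting a foundation model to custom data.
        return self._request("fine-tune", {"training_data": dataset_uri})

    def infer(self, prompt: str, max_tokens: int = 256) -> dict:
        # "Command" for querying a deployed model.
        return self._request("infer", {"prompt": prompt, "max_tokens": max_tokens})

shell = ModelShell("https://api.example-cloud.com/v1", "foundation-model-7b")
req = shell.infer("Summarize the 1990s client-server era in one sentence.")
print(req["url"])
```

Just as a shell hides syscalls behind a handful of commands, the platform API hides training infrastructure and serving details behind a handful of endpoints.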

Low-code and no-code tools are the analogue of the GUIs available in Windows and macOS. They democratize AI by enabling non-developers and power users to work with foundation models and build modern applications.

Then: Software applications were developed by OEMs or third parties.

Now: AI assistants such as Google's Duet AI, Amazon Q, and Microsoft Copilot are the new applications, becoming increasingly important in both consumer and business environments.

If the AI platform is the new shell, the AI assistant is the new application. Just as OS providers shipped built-in applications while also letting developers create custom ones, the new platforms provide development environments and tools for building custom AI assistants.

Duet AI integrates with Google Workspace, while Microsoft embeds Copilot into almost all of its business applications. Amazon Q is tightly integrated with the AWS Management Console, enabling users to perform common tasks.

Then: IDEs such as Borland Delphi, Visual Studio, and Eclipse were the standards for software development.

Now: Environments such as Microsoft Copilot Studio, Google Generative AI Studio, Amazon Step Functions, and others represent a new generation of development tools tailored for AI and machine learning.

Developers once relied on tools like Visual Studio, Eclipse, and Xcode to build custom applications. In the GenAI era, cloud-based tools such as Microsoft Copilot Studio, Google Generative AI Studio, and Amazon Bedrock with AWS Step Functions have become the IDEs of choice for developing AI assistants. They enable developers to integrate disparate data sources, LLMs, prompt engineering, and guardrails to build enterprise-grade AI assistants.

The GenAI era is redefining the computing landscape, echoing the transformative changes of the 1990s but with a focus on AI and cloud technologies. For leaders, embracing these changes and understanding their impact is critical to moving organizations forward in this new era of computing.
