Practical Computing: Moving Beyond AI Hype for Real-world Utility


Navigating the Evolution of AI: A Personal Journey

I have a longstanding relationship with AI, dating back to the 1980s when expert systems were in vogue. I sidestepped the ensuing AI winter by venturing into formal verification before ultimately finding my niche in networking in 1988.

Much like my colleague Larry Peterson, who treasures classics like the Pascal manual, I still hold onto a couple of AI books from the Eighties, including P. H. Winston’s “Artificial Intelligence” from 1984. Revisiting this book is a trip down memory lane, as its content remains surprisingly relevant today.

Winston’s insights shed light on the evolving landscape of AI, noting its integration into undergraduate computer science curricula and its regular coverage in reputable news magazines. Defining AI, however, proves to be a challenge: Winston’s definition amounts to enabling computers to exhibit intelligence, which he concedes is circular. He nevertheless outlines two primary goals of AI: making computers more useful and unraveling the principles that underlie intelligence.

The debate over the definition of AI persists, with some advocating for the term Artificial General Intelligence (AGI) to distinguish it from statistical models marketed as AI. However, AI has always encompassed a broad spectrum of computing techniques, many of which fall short of human-like intelligence.

In recent years, neural networks, once popular in the late 1980s, have made a resurgence, with deep learning revolutionizing fields like image recognition. However, the terminology surrounding AI systems, such as “deep learning,” can be misleading. While these networks improve with more training data, their learning process differs significantly from human cognition.

For instance, the defeat of AlphaGo by a human opponent employing an unconventional strategy highlights how poorly such systems adapt to novel situations. This disconnect between machine “learning” and human learning underscores how much language shapes our perception of AI systems.

Despite recent skepticism and failures in AI, it’s crucial not to overlook its positive impact. Machine learning, a subset of AI, offers practical solutions to real-world problems, particularly in networking. From denial-of-service prevention to malware detection, machine learning algorithms play a vital role in addressing various networking challenges.
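To make the flavor of such techniques concrete, here is a toy sketch of a perceptron, one of the simplest machine-learning classifiers, used to flag denial-of-service-like traffic flows. The features and training data are entirely hypothetical and chosen only for illustration; real systems use far richer features and models.

```python
# Toy illustration (not any deployed system): a perceptron trained to
# flag DoS-like flows from two hypothetical features per flow:
# (normalized packets per second, ratio of SYN packets).

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Learn weights for a linear classifier: 1 if w.x + b > 0 else 0."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # perceptron update: nudge weights toward the label
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Hypothetical labeled flows: low-rate benign traffic vs. SYN-flood-like bursts.
flows = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.95), (0.8, 0.9)]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = DoS-like

w, b = train_perceptron(flows, labels)
print(classify(w, b, (0.85, 0.9)))  # prints 1: flagged as DoS-like
```

The point is not the algorithm itself, which dates from the 1950s, but that a statistical model fit to labeled traffic can be genuinely useful without any claim to intelligence.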

One noteworthy application of AI is its use by Network Rail in the UK to manage vegetation along railway lines through image recognition technology. While perhaps not as flashy as other AI advancements, it exemplifies the practical utility of AI in solving tangible problems.

In light of recent AI hype and criticism, I advocate for a nuanced approach, preferring the term “machine learning” when appropriate. By focusing on “making computers useful,” we can harness the potential of AI to address real-world challenges effectively.