AI Everywhere

By Robert Hult | November 14, 2023

From high schools where students use artificial intelligence to complete assignments to chatbots that report the status of your lost luggage, AI is everywhere.

AI seems like a new phenomenon, but this technology has been in development for some 70 years. The term artificial intelligence was coined in the mid-1950s and has been a subject of research ever since. Leading technology innovators recognized the potential of AI and have been quietly developing the algorithms and computing hardware necessary to bring it to practical reality.

The introduction of AI has been compared with major technological advances on the scale of the industrial revolution. There is little doubt that AI and its many subcategories will become a significant technology disruptor that results in huge societal shifts. Some industry leaders have called for the creation of controls to manage the beast, while other voices caution against stifling innovation. It is probably too early to accurately forecast the extent of the change AI will bring, but given the nearly daily announcements of AI advances and extensions, we will likely get answers sooner rather than later.

The launch of ChatGPT by OpenAI in November 2022 ignited the current flood of public interest, and it was soon followed by announcements of next-generation generative AI applications (DaVinci AI, AIVA, DALL·E) and competitors (Google Bard, Microsoft Copilot, etc.). AI systems learn by being fed immense sets of general and application-specific data, from which they make inferences and create new insights, much like a human neural network. Generative AI uses these insights and data to create entirely original information, including text, images, and audio, spanning industry segments as diverse as business, science, government, and entertainment.

Groundbreaking technology like this is typically restricted to university research departments and government agencies. The last major upheaval in information exchange, created by the internet, began life that way. Initiated in the 1960s, the internet was envisioned as a channel for government researchers to share information. It was not until 1983, with the adoption of TCP/IP, that the foundation of the internet as we know it today was in place, and broad public access followed a decade later. The fact that ChatGPT suddenly became accessible free of cost to anyone with a computer moved the technology from theory to practice at an unprecedented pace.

AI upgrades are being released at a furious rate. GPT-4, the latest version as of this publication, is said to exhibit “human-level performance” and is designed to support natural language interactions between humans and machines. Employees who provide live customer support have reason to be concerned.

A key feature of AI is its adaptability.

AI-enabled systems can offer general information on a broad range of topics or focus on specific tasks in fields such as medicine, finance, education, or drug discovery. Rather than employing a financial consultant who tries to pick rising stocks based on the market’s performance over the past six months, an AI system could make recommendations on the direction of individual stocks based on a review of every market transaction that occurred over the past 50 years.

Computers using AI have already been used to create music, artwork, and computer code. A model’s accuracy depends on the amount and quality of the data consumed during its training phase. Results also reflect any intentional or accidental bias or omissions in the training data set. Current implementations of AI are subject to occasional “hallucinations,” in which the system confidently generates wildly inaccurate information. Parts of this article used ChatGPT for secondary verification of information. To a large degree, the results were basic and accurate, but useless for inquiries about anything that occurred after September 2021, the cutoff date of its training data. It knew practically nothing about the candidates in the next presidential election or the growth of co-packaged optics.

AI requires advanced system designs

The race for AI supremacy has begun, driven in part by fear of being left behind. Investment in AI software and hardware has increased dramatically. At the same time, suppliers of high-performance computing and network interconnects will see excellent opportunities for long-term sales growth.

Efficient AI computing system design requires at least four basic characteristics:

  1. High bandwidth
  2. Low latency
  3. High availability
  4. Reduced power consumption

Demands for higher-quality responses, along with a huge rise in the size of models, are putting pressure on computer designers to build systems capable of supporting them.

The integration of specialized accelerators that speed up specific computational tasks is well underway. The number of advanced accelerators from suppliers such as Nvidia and Intel deployed in a single machine has spiked from a few hundred to tens of thousands. In addition to supplying state-of-the-art components, Nvidia is building its own supercomputer, which will network thousands of “super chips” and hundreds of terabytes of memory.

The volume and speed of interconnects raise the potential for serious data bottlenecks. Connectivity among these many components is one of the factors driving adoption of fiber optic links. Optical switches will optimize bandwidth, reduce latency, and cut machine learning training time from months to weeks. Optical fiber-to-chip and intra-chip interconnects will likely be required to support next-generation supercomputer performance, which will be measured in exaflops.

Not everyone is excited about the AI revolution we are just now entering. More than 1,100 experts signed an open letter expressing concern that the technology is advancing too fast and that unanticipated consequences may have seriously detrimental effects. Questions abound: should AI be restricted to replacing unskilled menial jobs, or should it be allowed to automate all jobs? As AI-enabled devices continue to advance and possibly become self-aware, will they begin to make decisions that benefit themselves rather than their human creators? The letter proposed a six-month pause to identify hazards and develop a series of safety protocols to mitigate them; it is unclear whether this pause was ever widely implemented.

The genie is out of the bottle, and the best we can hope for is that it will ultimately benefit humanity and that controls will keep this powerful technology out of the hands of those who intend deception and harm. One pundit suggested that AI will not take your job, but someone using AI will. In the short term, AI-enabled systems will likely upend many careers, and those displaced may not have the skills needed for newly created positions. Few horse farriers became auto mechanics when the horse-and-buggy era ended. In the long term, AI is likely to generate many new job opportunities in industries not even imagined today.

Read Bob Hult’s Tech Trends series to learn more about the evolution of computing.

