Artificial Intelligence (AI) and Science

Artificial Intelligence (AI) seems to be in the news with promises of tremendous benefits for the average person. Specifically, “Generative AI” is presented as the foundation of all these benefits. Before we start believing all the super benefits from Generative AI programs like ChatGPT, it is instructive to understand the background behind these newer programs.

In the early 1980s, personal computers started becoming a tool adopted by large businesses. The development of spreadsheet programs, like SuperCalc, provided the ability to analyze amounts of data that previously would have required many weeks of effort from many people. It became simple to create a database, so one could store, modify, and analyze data in quantities that had not been possible before.

The next step was the ability to create a program with some self-learning: an expert system. The earliest ones were simple. One could develop a system that evaluated results and then predicted the probable cause or provided guidance toward a future outcome. One of the first systems the author developed was an analysis of quality failures at a manufacturing facility. There was a history of failures in final testing along with the causes that had previously been identified. Once the system was in place, each new failure was recorded along with its cause, and that data was entered into the database. Over time, the predictions of failure causes became more accurate as additional data accumulated. This type of expert system has a built-in trap!

What is the trap? When the data is collected from one location, it is applicable to that specific location and may not apply to a different one. As an example, consider a set of identical twins. If one goes to a high school that focuses on science and technology, and the other goes to a high school that focuses on artistic talents, like acting or music, the twins will have different capabilities when they graduate. The area in which the learning occurs affects the end result. The trap is assuming that the expert system applies across everything, when it is actually specific to the area on which it was trained.
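To make the failure-analysis example concrete, here is a minimal sketch in Python of how such a frequency-based expert system might work. The symptoms, causes, and counts are hypothetical, and a real system would sit on top of a proper database, but the principle is the same: record confirmed causes as they are found and predict the most frequent one for each symptom.

    from collections import defaultdict

    class FailureCausePredictor:
        """Frequency-based predictor of failure causes (hypothetical example)."""

        def __init__(self):
            # For each observed symptom, count how often each cause was confirmed.
            self.counts = defaultdict(lambda: defaultdict(int))

        def record(self, symptom, confirmed_cause):
            # Add one confirmed failure to the history.
            self.counts[symptom][confirmed_cause] += 1

        def predict(self, symptom):
            # Return the cause most often confirmed for this symptom, if any.
            causes = self.counts.get(symptom)
            if not causes:
                return None
            return max(causes, key=causes.get)

    # Hypothetical history of failures found in final testing.
    predictor = FailureCausePredictor()
    predictor.record("voltage drift", "bad solder joint")
    predictor.record("voltage drift", "bad solder joint")
    predictor.record("voltage drift", "contaminated wafer")
    print(predictor.predict("voltage drift"))  # -> bad solder joint

Notice that the “knowledge” here is nothing more than the accumulated history from one facility, which is exactly why the trap described above exists.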

In the latter part of the 1990s, text mining was developed to analyze and correlate relationships among documents based on the occurrence of text phrases. From those phrases, one could tabulate how frequently particular items appeared together in the documents, and from those frequencies predict correlations among the items identified in the original analysis. This gave program builders a means of pulling specific content from existing documents.
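As a hypothetical illustration of the idea, the short Python sketch below counts how often a few chosen phrases appear together across a handful of made-up documents; the pairs with the highest counts become the candidate correlations.

    from collections import Counter
    from itertools import combinations

    # Hypothetical documents and phrases of interest.
    documents = [
        "nanoparticle coating improves battery life",
        "battery life depends on electrode coating",
        "nanoparticle synthesis requires careful temperature control",
    ]
    phrases = ["nanoparticle", "coating", "battery life"]

    # Count how often each pair of phrases occurs in the same document.
    cooccurrence = Counter()
    for doc in documents:
        present = [p for p in phrases if p in doc]
        for pair in combinations(sorted(present), 2):
            cooccurrence[pair] += 1

    # The most frequent pairs are the candidate correlations.
    for pair, count in cooccurrence.most_common():
        print(pair, count)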

While most people don’t realize it, chatbots have been around for more than ten years. What is a chatbot? It is a computer program that can simulate a conversation with a person. These started as simple question-and-answer exchanges in which you could either speak your answer or enter something from a keyboard. Over time, additional decision-making (i.e., artificial intelligence) was added to guide the person through a selection of choices. “The latest evolution of AI chatbots, often referred to as “intelligent virtual assistants” or “virtual agents,” can not only understand free-flowing conversation through use of sophisticated language models, but even automate relevant tasks.” [Ref. 1]
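For readers who have never seen one from the inside, the toy Python sketch below captures the older, menu-driven style of chatbot described above. The menu and canned responses are invented for illustration and are not drawn from any particular product.

    # A toy, menu-driven chatbot; the choices and responses are invented.
    responses = {
        "1": "Our support line is open 9 a.m. to 5 p.m., Monday through Friday.",
        "2": "You can reset your password from the account settings page.",
        "3": "A representative will contact you within one business day.",
    }

    print("How can I help?  1) Hours  2) Password reset  3) Talk to a person")
    choice = input("Enter 1, 2, or 3: ").strip()
    print(responses.get(choice, "Sorry, I did not understand that choice."))

The newer intelligent virtual assistants replace the fixed menu with a language model, but the basic loop of interpreting the user’s input and choosing a response remains.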

This brings us to the latest efforts, known as Generative AI, which can pull from a vast amount of data to produce text, images, videos, etc., that appear to be original concepts. Yes, the information that is provided may appear novel, but it is based on a collection of existing data and on algorithms that control how the computer directs the accumulation of that data and how the results are presented. There is a concern that whoever controls the algorithms controls what type of results will be provided. An article in the November 30, 2023 issue of the Wall Street Journal argues that these algorithms should be open source and available to all. [Ref. 2]

That brings us to the title of this blog. If one considers the computational power available, the analysis of many combinations of molecules against a predetermined set of characteristics can be employed to eliminate a lot of possible combinations and provide some strong suggestions for researchers to evaluate. The program builds on historical data and algorithms to do its specific analysis. The same type of effort can be applied to novel combinations of materials and elements. With whatever guidelines are incorporated in the algorithms, the results can suggest novel materials. Some would say the computer “created” the new drugs, materials, or whatever. In reality, the results were created by the people who created the algorithms: human input and human direction. This raises an interesting question. Are the people who created the algorithms the real owners of the “discoveries”? Something for the courts to decide in the future.
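As a purely hypothetical illustration of that kind of screening, the Python sketch below filters pairs of candidate materials against two simple rules. The materials, property values, and thresholds are invented; a real effort would draw on measured or simulated data and far more sophisticated criteria, but the human-written rules are still what decide which combinations survive.

    from itertools import combinations

    # Invented property values for four candidate materials.
    properties = {
        "A": {"melting_point": 1200, "cost": 5},
        "B": {"melting_point": 900,  "cost": 2},
        "C": {"melting_point": 1500, "cost": 9},
        "D": {"melting_point": 700,  "cost": 1},
    }

    def passes(pair):
        # Keep pairs whose average melting point and combined cost meet the rules.
        a, b = (properties[m] for m in pair)
        avg_mp = (a["melting_point"] + b["melting_point"]) / 2
        total_cost = a["cost"] + b["cost"]
        return avg_mp >= 1000 and total_cost <= 10

    # Enumerate every pairing and keep only the ones that satisfy the rules.
    candidates = [pair for pair in combinations(properties, 2) if passes(pair)]
    print(candidates)  # the short list a researcher would evaluate further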

References:

  1. IBM, “What is a chatbot?” https://www.ibm.com/topics/chatbots#What+is+a+chatbot%3F
  2. “AI Needs Open-Source Models to Reach Its Potential,” Wall Street Journal, November 30, 2023, p. A17.

About Walt

I have been involved in various aspects of nanotechnology since the late 1970s. My interest in promoting nano-safety began in 2006 and produced a white paper in 2007 explaining the four pillars of nano-safety. I am a technology futurist and am currently focused on nanoelectronics, single-digit nanomaterials, and 3D printing at the nanoscale. My experience includes three startups, two of which I founded, 13 years at SEMATECH, where I was a Senior Fellow of the technical staff when I left, and 12 years at General Electric, nine of them on corporate staff. I have a Ph.D. from the University of Texas at Austin, an MBA from James Madison University, and a B.S. in Physics from the Illinois Institute of Technology.