{"id":591,"date":"2023-11-30T12:22:35","date_gmt":"2023-11-30T18:22:35","guid":{"rendered":"http:\/\/www.nano-blog.com\/?p=591"},"modified":"2023-11-30T12:25:16","modified_gmt":"2023-11-30T18:25:16","slug":"artificial-intelligence-ai-and-science","status":"publish","type":"post","link":"http:\/\/www.nano-blog.com\/?p=591","title":{"rendered":"Artificial Intelligence (AI) and Science"},"content":{"rendered":"\n<p>Artificial Intelligence (AI) seems to be in the news with promises of tremendous benefits for the average person.&nbsp; Specifically, \u201cGenerative AI\u201d will be the foundation of all these benefits.&nbsp; Before we start believing all the super benefits from Generative AI programs like ChatGPT, it is instructive to understand the background behind these newer programs.<\/p>\n\n\n\n<p>In the early 1980s, personal computers started becoming a tool that was adopted by large businesses.&nbsp; The development of spreadsheet programs, like SuperCalc, provided the ability to analyze large amounts of data that would previously have required many weeks of effort from many people.&nbsp; It was simple to create a database so that one could store, modify, and analyze data in vast amounts that were not possible previously.<\/p>\n\n\n\n<p>The next step was the ability to create a program with self-learning, an expert system.&nbsp; The earliest ones were simple.&nbsp; One could develop a system that evaluated results and then made predictions about the possible cause or provided guidance for a future outcome.&nbsp; One of the first ones the author developed was an analysis of quality failures at a manufacturing facility.&nbsp; There was a history of failures in the final testing and related causes that had been previously identified.&nbsp; Once the system was in place, future failures would be identified along with their causes, and that data entered into the database.&nbsp; Over time, the ability to predict the cause of failures 
would become more accurate due to the additional data being entered.&nbsp; This type of expert system has a built-in trap!<\/p>\n\n\n\n<p>What is the trap?&nbsp; When the data is collected from one location, it will be applicable to that specific location and may not even apply to a different location.&nbsp; As an example, consider a set of identical twins.&nbsp; If one goes to a high school that focuses on science and technology, and the other goes to a high school that focuses on artistic talents, like acting or music, the twins will have different capabilities when they graduate.&nbsp; The area in which the learning occurs impacts the end result.&nbsp; The trap is assuming that the expert system applies across everything, when it is actually focused on one specific context.<\/p>\n\n\n\n<p>In the latter part of the 1990s, Text Mining was developed to analyze and correlate relations among documents based on the occurrence of text phrases.&nbsp; Based on the phrases, it was possible to determine the frequency of occurrences between other items in the documents.&nbsp; Based on the identified frequencies, it was possible to predict correlations among items identified in the original text analysis.&nbsp; This provided the program builders with a means of pulling specific content from existing documents.<\/p>\n\n\n\n<p>While most people don\u2019t realize it, ChatBots have been around for more than ten years.&nbsp; What is a ChatBot?&nbsp; It is a computer program that can simulate a conversation with a person.&nbsp; These started as simple question-and-answer communications with the ability to either speak your answer or enter something from a keyboard.&nbsp; Additional decision making (i.e., artificial intelligence) was then added to assist the person through a selection of choices.&nbsp; \u201cThe latest evolution of AI chatbots, often referred to as <a href=\"https:\/\/www.ibm.com\/products\/watsonx-assistant\">\u201cintelligent virtual assistants\u201d or \u201cvirtual 
agents,\u201d<\/a> can not only understand free-flowing conversation through use of sophisticated language models, but even automate relevant tasks.\u201d [Ref. 1]<\/p>\n\n\n\n<p>This brings us to the latest efforts, known as Generative AI, which can pull from a vast amount of data to produce text, images, videos, etc., that appear to be original concepts.&nbsp; Yes, the information that is provided may appear to be novel, but it is based on a collection of existing data and algorithm(s) that control how the computer directs the accumulation of data and in what manner the results are presented.&nbsp; There is a concern that control of the algorithms determines what type of results will be provided.&nbsp; An article in the November 30, 2023 issue of the Wall Street Journal provides an argument for these algorithms to be open source and available to all. [Ref. 2]<\/p>\n\n\n\n<p>That brings us to the title of this blog.&nbsp; If one considers the computational power available, the analysis of multiple combinations of molecules based on a predetermined set of characteristics can be employed to eliminate a lot of possible combinations and provide some strong suggestions for researchers to evaluate.&nbsp; The program is building on historical data and algorithms to do its specific analysis.&nbsp; The same type of effort can be applied to novel combinations of materials\/elements.&nbsp; With whatever guidelines are incorporated in the algorithms, the results can provide novel materials.&nbsp; Some would say the computer \u201ccreated\u201d the new drugs, materials, or whatever.&nbsp; In reality, the results were created by the people who created the algorithms \u2013 human input and human direction.&nbsp; This raises an interesting question.&nbsp; Are the people who created the algorithms the real owners of the \u201cdiscoveries\u201d?&nbsp; Something for the courts to decide in the future.<\/p>\n\n\n\n<p>References:<\/p>\n\n\n\n<ol 
class=\"wp-block-list\">\n<li><a href=\"https:\/\/www.ibm.com\/topics\/chatbots#What+is+a+chatbot%3F\">https:\/\/www.ibm.com\/topics\/chatbots#What+is+a+chatbot%3F<\/a><\/li>\n\n\n\n<li><strong>\u201cAI Needs Open-Source Models to Reach Its Potential\u201d<\/strong> Wall Street Journal, Thursday, 11\/30\/2023, Page A017<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Artificial Intelligence (AI) seems to be in the news with promises of tremendous benefits for the average person.&nbsp; Specifically, \u201cGenerative AI\u201d will be the foundation [..]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[12],"tags":[],"class_list":["post-591","post","type-post","status-publish","format-standard","hentry","category-science"],"_links":{"self":[{"href":"http:\/\/www.nano-blog.com\/index.php?rest_route=\/wp\/v2\/posts\/591","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.nano-blog.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.nano-blog.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.nano-blog.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.nano-blog.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=591"}],"version-history":[{"count":1,"href":"http:\/\/www.nano-blog.com\/index.php?rest_route=\/wp\/v2\/posts\/591\/revisions"}],"predecessor-version":[{"id":592,"href":"http:\/\/www.nano-blog.com\/index.php?rest_route=\/wp\/v2\/posts\/591\/revisions\/592"}],"wp:attachment":[{"href":"http:\/\/www.nano-blog.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=591"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.nano-blog.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=591"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/ww
w.nano-blog.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=591"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}