Two-Dimensional Meta-Materials

The term meta-materials refers to materials created by producing structures, and levels of structural complexity, that do not occur in nature. Graphene has long been a material of interest, but the persistent issue has been producing it in large areas without defects. The work described in Reference 1 focused on evaluating a nanoelectronics platform based on graphene; the interest stems from the technology's compatibility with conventional semiconductor manufacturing. The work builds on research that found a layer of graphene forming on top of a silicon carbide crystal. It was discovered that electric currents flow without resistance along the edges of this material and that the graphene devices can be interconnected without metal wires. The researchers observed that electrons could travel large distances, microns, without scattering, whereas previous technologies could only achieve about 10 nm before scattering. Their estimate is that it will be up to 10 years before graphene-based electronics can be realized in volume manufacturing.

A slightly different class of two-dimensional meta-materials is called MXenes. MXenes are part of a large family of nitrides and carbides of transition metals constructed in two-dimensional layers, where two or more metal layers are interspersed with a carbon or nitrogen layer and the surface is finished with a termination layer. According to the researchers [Reference 2], MXenes can be fabricated as nanometer-thin flakes that disperse well in water and can be inked onto almost any surface. They can also be made as films, fibers, and even powders. Research areas using these materials include optoelectronics, electromagnetic interference shielding, wireless antennas, catalysis, water purification, biosensing, and many more. There is also the possibility of using these materials as alternatives to lithium-ion batteries. The issue right now is that the material tends to oxidize and degrade quickly under ambient operating conditions, and removing the oxidation will require some additional invention. Work done in Australia has found one method that works, which applies a focused 10 MHz frequency beam that breaks the bonds of the oxide. Work in China has used this material in an electrochemical biosensor coupled with gold nano-arrays in an attempt to build a noninvasive cancer detection system. One challenge in using this material is that there is an extremely large number of possible configurations; finding the best ones to work with will require significant computational analysis.

Reference 3 looks at a new layering technique for two-dimensional materials, with the possibility of tuning the materials for different applications. One finding was that sandwiching atomic layers of a transition metal, such as titanium, between monoatomic layers of another metal, such as molybdenum, with carbon atoms holding them together, produces a stable material. A key result of their work, which could be beneficial in the future, is that they have found a way to combine elemental materials into a stable compound that exhibits new properties. This particular arrangement of atomic structures opens up the possibility of fine-tuning the resulting molecular structure and its related physical properties to meet stringent applications that cannot be considered at the present time.

The development of atomic-layer materials, and the ability to manipulate them in ways that produce different characteristics, is opening up an entirely new world for researchers to create new, and previously unknown, material properties. This is not something that will happen immediately, but the effort is providing a whole new branch of scientific experimentation. It will be interesting to see what the future brings.

References:

  1. https://www.graphene-info.com/researchers-take-step-towards-graphene-electronics
  2. https://spectrum.ieee.org/why-mxenese-matter
  3. https://spectrum.ieee.org/new-method-for-layering-2d-materials-offers-breakthrough-in-energy-storage
Nanotechnology, Semiconductor Technology

Scientific/Medical Integrity and the Future

Over the years, we have witnessed multiple peer-reviewed papers being retracted.  A recent example, as reported in numerous places, is described in Reference 1: “The Dana-Farber Cancer Institute (DFCI), an affiliate of Harvard Medical School, is seeking to retract six scientific studies and correct 31 others that were published by the institute’s top researchers, including its CEO. The researchers are accused of manipulating data images with simple methods, primarily with copy-and-paste in image editing software, such as Adobe Photoshop.”

There were allegations of data manipulation in 57 DFCI-led studies. [Ref. 2]  There has also been an increase in the use of AI tools to check for fraudulent imagery.  In an editorial in Science [Ref. 3], the editor states that the journal is using Proofig to look for image duplication and other types of image modification, and iThenticate for plagiarism detection.

In a related area, AI is running into copyright difficulty with its generated images.  IEEE Spectrum magazine [Ref. 4] has an article on the potential for copyright violations.  One example shows generated text that is almost 90% identical, in words and sentences, to a New York Times article.  While the article refers to this type of result as plagiaristic output, it would simply be called plagiarism if a person did it.  The tendency of AI-generated text to create imaginary references has been described as hallucinatory output.  A key question raised was: is there any way for a user of generative AI to ensure there is no copyright infringement or plagiarism?  A good question that will need to be answered.  In the evaluation of images, the researchers found hundreds of instances where there was very little difference from recognizable characters in video and games.  This analysis was based on a very limited set of subjects (a few hundred).

While the use of generative AI is becoming more widespread, even careful reviews of the data and pictures will not prevent misuse of the results.  The April 2020 blog [Ref. 5] covered the topic of scientific integrity and COVID-19 in detail.  The key point was that even with a solid research foundation, the results can be subject to misinterpretation by people who are unfamiliar with the various techniques for analyzing the data.  Another point in that blog is that when the results of an analysis are reduced to a single number, the potential for creating inappropriate impressions is high.  So, the construction of the model and its assumptions are very important.

This brings up another question: what are the underpinnings of artificial intelligence programs?  What are the algorithms being employed, and do these algorithms interact with each other?  As described in earlier blogs covering expert systems work in the 1980s, an expert system is based on the environment (the data analyzed) it was created for.  The expert system then improves its performance based on new data acquired through its operation.  This is a problem of self-biasing.  AI programs are built on a base of information, and sometimes the data absorbed is protected, e.g., the New York Times database, so all the data might not be available.  If one were to focus on a single database and use it to project future information, there would be a significant difference in the resulting news projections depending on whether the data were obtained from CNN or Fox News.

The applications, and even the development of new tools for creating reports and the complementary programs for evaluating the veracity of the information presented, are still in the very early stages of development.  This year, 2024, should witness some interesting developments in the application of AI tools.  Significant assistance is already being provided in medicine, and more should be coming.  It just requires careful application of the programs and understanding of the data.

References:

  1. https://arstechnica.com/science/2024/01/top-harvard-cancer-researchers-accused-of-scientific-fraud-37-studies-affected/
  2. https://arstechnica.com/science/2024/01/all-science-journals-will-now-do-an-ai-powered-check-for-image-fraud/
  3. https://www.science.org/doi/10.1126/science.adn7530
  4. https://spectrum.ieee.org/midjourney-copyright
  5. http://www.nano-blog.com/?p=370
Science

Year-end Wrap-up

The most popular technical/computer topic at the end of 2023 was Generative Artificial Intelligence (AI), which was briefly touched on in last month’s blog.  As 2023 draws to a close, the New York Times is suing the major developers of Generative AI for using its copyrighted news database without permission or compensation. [Ref. 1]  On the restrictive side, the UK’s top court decided that an AI cannot be named as an inventor on a patent. [Ref. 2]  It also indicated that the person who owned the program's results was not the owner of the patent, because he was not named on the patent application as inventor.  This should make for an interesting upcoming year in patent law.

Regarding materials and semiconductors, there is a proposed new approach to semiconductor materials: ferroelectric semiconductors are being studied.  The issues of speed, size (including thickness, or thinness), and operation at high speed and high power are challenges for moving to larger, faster devices.  The University of Michigan research [Ref. 3] is focused on the ferroelectric high electron mobility transistor (FeHEMT).  Ferroelectric semiconductors can sustain an electrical polarization (think of magnetism), but the ferroelectric semiconductor can switch which end is positive and which is negative.  In other words, the transistor can change how it functions.

Researchers at Lund University in Sweden [Ref. 4] have demonstrated a configurable transistor.  The potential of this device is more precise control of the electronics.  Their work is with III-V materials as a replacement for silicon.  The promise is high-frequency applications (6G and 7G networks) with reduced power requirements.  The approach would significantly benefit neuromorphic computation, which would enable stronger AI applications.  They examined new ferroelectric memory with tunnel barriers in order to create new circuit architectures (transistor-type memory).  A key part of this work is the creation and placement of ferroelectric grains in the device structure.  The result is a ferro-TFET transistor.  Like the development mentioned above, the properties of the transistor can be modified during the operation of the device.  One advantage is that the “new” properties of the device remain constant without any power needed to keep their state.

Researchers from Northwestern University, Boston College, and MIT are pursuing a different type of transistor function. [Ref. 5]  They claim it can store and process information simultaneously, like the human brain.  A key difference from previous research is the focus on bringing the memory and processing functions together without the time lag of transporting electrical signals between them.  Their claim is that by layering different patterns, two-dimensional materials are formed with novel properties not present in the individual materials.  The researchers stacked bilayer graphene and hexagonal boron nitride; by rotating one layer with respect to the other, different properties could be developed in each graphene layer.  The lead researchers describe a new nanoelectronic device that appears to be capable of manipulating data in an energy-efficient manner.  In their experiments, they have demonstrated that their synaptic transistor can identify similar patterns.  The additional claim is that the new device can provide a major step forward in AI applications.

It appears that the work on novel transistor structures and functionality might provide higher-frequency applications with the potential of reducing the total power required.  The power reduction directly reduces the heat generated by the devices.  We can expect more results in the coming 2024 year.

References:

  1. https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html
  2. https://www.theguardian.com/technology/2023/dec/20/ai-cannot-be-named-as-patent-inventor-uk-supreme-court-rules  
  3. Fully epitaxial, monolithic ScAlN/AlGaN/GaN ferroelectric HEMT – https://pubs.aip.org/aip/apl/article-abstract/122/9/090601/2880773/Fully-epitaxial-monolithic-ScAlN-AlGaN-GaN?redirectedFrom=fulltext
  4. https://www.lunduniversity.lu.se/article/cutting-edge-transistors-semiconductors-future
  5. https://news.northwestern.edu/stories/2023/12/new-brain-like-transistor-mimics-human-intelligence/
Semiconductor Technology, Technology

Artificial Intelligence (AI) and Science

Artificial Intelligence (AI) seems to be in the news with promises of a tremendous amount of benefits for the average person.  Specifically, “Generative AI” will be the foundation of all these benefits.  Before we start believing all the super benefits from Generative AI programs like ChatGPT, it is instructive to understand the background behind these newer programs.

In the early 1980s, personal computers started becoming a tool adopted by large businesses.  The development of spreadsheet programs, like SuperCalc, provided the ability to analyze large amounts of data that previously would have required many weeks of effort from many people.  It was simple to create a database, so one could store, modify, and analyze amounts of data that were not possible to handle previously.

The next step was the ability to create a program with self-learning, an expert system.  The earliest ones were simple.  One could develop a system that evaluated results and then made predictions about the probable cause or provided guidance toward a future outcome.  One of the first ones the author developed was an analysis of quality failures from a manufacturing facility.  There was a history of failures in final testing along with the related causes that had previously been identified.  Once the system was in place, newly identified failures would be recorded along with their causes and that data entered into the database.  Over time, the ability to predict the cause of failures would become more accurate due to the additional data being entered.  This type of expert system has a built-in trap!
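To make the idea concrete, here is a minimal sketch (in Python, and not the author's original system) of this kind of self-updating lookup: each confirmed failure and cause is added to a small database, and the prediction is simply the cause most often seen for that symptom. The class name and the example failure data are invented for illustration.

```python
from collections import defaultdict, Counter

class FailureAdvisor:
    """Toy expert system: accumulates failure/cause history and predicts causes."""

    def __init__(self):
        self.history = defaultdict(Counter)  # failure symptom -> counts of causes

    def record(self, symptom, cause):
        """Enter a confirmed failure and its root cause into the database."""
        self.history[symptom][cause] += 1

    def predict(self, symptom):
        """Suggest the most frequently seen cause for this symptom so far."""
        if symptom not in self.history:
            return None  # no local experience yet -- the 'trap' discussed below
        return self.history[symptom].most_common(1)[0][0]

advisor = FailureAdvisor()
advisor.record("solder joint open", "reflow profile too cool")
advisor.record("solder joint open", "reflow profile too cool")
advisor.record("solder joint open", "board warpage")
print(advisor.predict("solder joint open"))  # -> "reflow profile too cool"
```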

What is the trap?  When the data is collected from one location, it will be applicable to that specific location and may not apply to a different location.  As an example, consider a set of identical twins.  If one goes to a high school that focuses on science and technology, and the other goes to a high school that focuses on artistic talents, like acting or music, the twins will have different capabilities when they graduate.  The area in which the learning occurs impacts the end result.  The trap is assuming that the expert system applies to everything, when it is actually focused on the specific environment it was trained in.

In the latter part of the 1990s, text mining was developed to analyze and correlate relations among documents based on the occurrence of text phrases.  From the phrases, it was possible to measure the frequency of co-occurrence with other items in the documents, and from those frequencies to predict correlations among the items identified in the original text analysis.  This provided the program builders with a means of pulling specific content from existing documents.
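As an illustration only (the documents and phrases below are invented), a few lines of Python capture the core of the idea: count how often pairs of phrases appear in the same document and treat the counts as a rough measure of correlation.

```python
from collections import Counter
from itertools import combinations

documents = [
    "graphene transistor on silicon carbide",
    "ferroelectric transistor memory",
    "graphene membrane for filtration",
]
phrases = ["graphene", "transistor", "membrane", "ferroelectric"]

co_occurrence = Counter()
for doc in documents:
    present = [p for p in phrases if p in doc]       # phrases found in this document
    for a, b in combinations(sorted(present), 2):    # every pair that co-occurs
        co_occurrence[(a, b)] += 1

for pair, count in co_occurrence.most_common():
    print(pair, count)
```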

While most people don’t realize it, chatbots have been around for more than ten years.  What is a chatbot?  It is a computer program that can simulate a conversation with a person.  These started as simple question-and-answer exchanges, with the ability to either speak your answer or enter something from a keyboard.  Additional decision making (i.e., artificial intelligence) was then added to assist the person through a selection of choices.  “The latest evolution of AI chatbots, often referred to as “intelligent virtual assistants” or “virtual agents,” can not only understand free-flowing conversation through use of sophisticated language models, but even automate relevant tasks.” [Ref. 1]

This brings us to the latest efforts, known as Generative AI, which can pull from a vast amount of data to produce text, images, videos, etc., that appear to be original concepts.  Yes, the information that is provided may appear to be novel, but it is based on a collection of existing data and the algorithms that control how the computer directs the accumulation of data and in what manner the results are presented.  There is a concern that control of the algorithms determines what type of results will be provided.  An article in the November 30, 2023 issue of the Wall Street Journal provides an argument for these algorithms to be open source and available to all. [Ref. 2]

That brings us to the title of this blog.  If one considers the computational power available, the analysis of multiple combinations of molecules against a predetermined set of characteristics can be employed to eliminate many possible combinations and provide strong suggestions for researchers to evaluate.  The program is building on historical data and algorithms to do its specific analysis.  The same type of effort can be applied to novel combinations of materials and elements.  With whatever guidelines are incorporated in the algorithms, the results can suggest novel materials.  Some would say the computer “created” the new drugs, materials, or whatever.  In reality, the results were created by the people who created the algorithms – human input and human direction.  This raises an interesting question: are the people who created the algorithms the real owners of the “discoveries”?  Something for the courts to decide in the future.

References:

  1. https://www.ibm.com/topics/chatbots#What+is+a+chatbot%3F
  2. “AI Needs Open-Source Models to Reach Its Potential” Wall Street Journal Thursday, 11/30/2023 Page .A017
Science

Analog & Digital – Part II

Today’s electronics are based on the transistor, which can be switched either on (a one) or off (a zero).  The sounds coming from musical instruments, by contrast, are basically smooth variations, which are described as analog signals.  In order for the signals (electronic representations of the sound waves) to be processed, there must be a conversion from the analog wave to a digital representation of the original wave.  An analog-to-digital converter continuously transforms the analog signal into its binary (ones and zeros) equivalent at each instant of measurement, so that it can be processed by the electronic equipment.
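A minimal sketch of that conversion, with an arbitrary sample rate and bit depth chosen purely for illustration, looks like this in Python:

```python
import math

SAMPLE_RATE = 8000      # samples per second (illustrative choice)
BITS = 8                # resolution of each sample (illustrative choice)
LEVELS = 2 ** BITS

def sample_and_quantize(freq_hz, n_samples):
    """Sample a smooth sine wave at regular instants and return n-bit binary codes."""
    codes = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        analog = math.sin(2 * math.pi * freq_hz * t)     # analog value in [-1, 1]
        code = round((analog + 1) / 2 * (LEVELS - 1))    # map to an integer 0..255
        codes.append(format(code, "08b"))                # the ones and zeros
    return codes

print(sample_and_quantize(440, 5))  # first few samples of an A-440 tone
```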

Consider the piano and the violin over the twelve notes from C to B on a piano.  Figure 1 shows the 12 notes from C through B, with A being the key frequency of 440 Hz. [Ref. 1]  The point of this chart is that the piano has fixed notes and is more like digital information from a computer.  The violin, on the other hand, can produce a continuous range of frequencies.  (There is also a slight variation of the piano key frequencies from a straight line for the violin.)

Figure 1
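For readers who want the numbers behind the chart, the twelve frequencies can be computed from A4 = 440 Hz; this short sketch assumes standard equal temperament, which is how modern pianos are tuned:

```python
# Each of the twelve notes is a factor of 2^(1/12) above the one below it,
# anchored to A4 = 440 Hz. The piano offers only these discrete values;
# a violin can produce any frequency in between.
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
A4 = 440.0

for i, name in enumerate(NAMES):
    freq = A4 * 2 ** ((i - 9) / 12)   # A is the 10th note (index 9) of this octave
    print(f"{name}4: {freq:6.2f} Hz")
# Prints C4 = 261.63 Hz ... A4 = 440.00 Hz ... B4 = 493.88 Hz
```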

One interesting item is that since the piano has discrete frequencies associated with the various keys, the actual sound is not a smooth ramp up and down like a sine wave. 

Figure 2 depicts a violin note and a piano note, and shows the rapid drop-off of the note from the violin.  [Ref. 2]

Figure 2

The piano note is created by the key driving a hammer that strikes the stretched, properly tuned piano wire.  The initial strike is full of non-harmonic tones that damp down rapidly. [Ref. 3]  The reverberation from the strike produces the wave form.  The violin is different in that the vibrating string creates harmonics, which produce the multitude of peaks and valleys shown in Figure 3 [Ref. 4].

Figure 3

I will borrow some explanation of harmonics from Reference 5.  It explains that an instrument produces different wave shapes based on the shape of the instrument, and it compares wave shapes and harmonics.  Another factor is the way the note is played.  “If you press a piano key and release it, the sound changes volume gradually over time. First, it rises quickly (or “attacks”) to its maximum volume. Next, the sound “decays” to a lower level and stays there or “sustains.” Finally, when we let go of the key, the sound “releases” and dies down to silence.”

“There are other factors too. An instrument doesn’t just produce a single sound wave at a single pitch (frequency). Even if it’s playing a steady note, it’s making many different sound waves at once: it makes one note (called a fundamental frequency or first harmonic) and lots of higher, related notes called harmonics or overtones. The frequency of each harmonic is a multiple of the fundamental frequency. So, if the fundamental frequency (also called the first harmonic) is 200Hz (which we can think of as simply 1 × 200Hz), the second harmonic is 400Hz (2 × 200Hz), the third is 600Hz (3 × 200Hz), the fourth is 800Hz (4 × 200Hz), and so on. Playing together, the harmonics make a dense, complex sound a bit like a barber’s shop choir, with low voices and high voices all singing in tune. The more harmonics there are, the richer the sound.”
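A small sketch of that harmonic series, using the 200 Hz fundamental from the quote and an arbitrary 1/n amplitude roll-off (real instruments have their own characteristic mix, which is what makes a violin sound different from a piano), would look like this:

```python
import math

FUNDAMENTAL = 200.0   # Hz, as in the quoted example
N_HARMONICS = 5       # how many multiples of the fundamental to include

def note_sample(t):
    """Instantaneous value of the composite note at time t (seconds)."""
    return sum(
        (1.0 / n) * math.sin(2 * math.pi * n * FUNDAMENTAL * t)
        for n in range(1, N_HARMONICS + 1)
    )

# Harmonic frequencies: 200, 400, 600, 800, 1000 Hz
print([n * FUNDAMENTAL for n in range(1, N_HARMONICS + 1)])
print(round(note_sample(0.001), 3))   # value of the wave 1 ms into the note
```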

Getting back to analog and digital: what the instrument delivers is a continuous series of complex waves that contain smaller and smaller components.  Taking these waves and “chopping” them up into discrete segments averages out the variation within each measurement-sized chunk of the sound wave.  So, no matter how finely one dissects the wave, there is averaging within each bite-sized piece of the actual music.  It is possible to take that measurement down to a small enough size that the typical listener will not notice the subtle differences from the analog.  BUT, sound can also be subtly felt.  No matter how small the averaging size, there will be a difference from the original analog sound wave.  Consequently, vinyl records are making a comeback, because vinyl is an analog delivery of the sound and provides superior reproduction of the actual sounds.
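To see that argument in numbers, the following sketch quantizes a smooth wave at several bit depths and measures the leftover (root-mean-square) error; the frequency, sample rate, and sample count are arbitrary choices. The error shrinks as the resolution grows, but it never reaches zero.

```python
import math

def rms_quantization_error(bits, n_samples=1000, freq=440.0, rate=44100.0):
    """Quantize a sine wave to 2**bits levels and return the RMS error vs. the analog wave."""
    levels = 2 ** bits
    total = 0.0
    for i in range(n_samples):
        analog = math.sin(2 * math.pi * freq * i / rate)
        code = round((analog + 1) / 2 * (levels - 1))      # digital code for this sample
        digital = code / (levels - 1) * 2 - 1              # value the code represents
        total += (analog - digital) ** 2
    return math.sqrt(total / n_samples)

for bits in (4, 8, 16, 24):
    print(bits, "bits ->", rms_quantization_error(bits))
```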

References:

  1. https://www.intmath.com/trigonometric-graphs/music.php
  2. https://www.shaalaa.com/question-bank-solutions/two-musical-notes-of-the-same-pitch-and-same-loudness-are-played-on-two-different-instruments-their-wave-patterns-are-as-shown-in-following-figure-properties-of-sounds_36995
  3. https://dsp.stackexchange.com/questions/46598/mathematical-equation-for-the-sound-wave-that-a-piano-makes
  4. https://www.google.com/search?q=violin%20sound%20wave&tbm=isch&client=firefox-b-1-d&hl=en&sa=X&ved=0CCAQtI8BKAFqFwoTCMD74Mi9-YEDFQAAAAAdAAAAABAU&biw=1542&bih=994
  5. https://www.explainthatstuff.com/synthesizers.html
Technology

Analog & Digital – Part I

There is no question that we live in a digital world.  What does that mean?  Is there another world?  What is the difference?  We live in a world of computers.  Computers are based on components of electrical circuits.  The basic part of the circuit is defined by either being “on” (conducting electrical signals) or “off” (not conducting).  Each of these basic circuit parts is considered a “bit”.  So, a 32-bit controller divides its control range into 2^32 steps, while a 64-bit controller has 2^64 steps.  Notice the term “steps”.  If one considers a child’s slide, it is a continuous slope.  If we were to make that digital, it would have small steps in place of the continuous slide.  We would not do that, of course, but we do it with electronics.  Have you ever tried to adjust the volume from a speaker and found one setting was too much and the next lower setting was too little?  That is due to the digital steps in the controller.  It is possible to increase the number of steps to make the result appear smooth, but it would still be steps.
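As a quick illustration of how the step count grows with the number of bits (the 0–100% volume range is just an example, not a property of any particular controller):

```python
# An n-bit control divides its range into 2**n discrete levels.
# The step size is the smallest change the control can make.
for bits in (4, 8, 16, 32):
    levels = 2 ** bits
    print(f"{bits:2d} bits -> {levels:>12,} levels, "
          f"step size = {100 / (levels - 1):.10f} % of full volume")
```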

From Reference 1: “An analog computer or analogue computer is a type of computer that uses the continuous variation aspect of physical phenomena such as electrical, mechanical, or hydraulic quantities (analog signals) to model the problem being solved. In contrast, digital computers represent varying quantities symbolically and by discrete values of both time and amplitude (digital signals).”  Analog computers were used in the 1940s to provide assistance with fire-control systems for weapons during World War II, among other applications.

The digital computer became the standard for multiple reasons.  Its speed of calculation became far greater than that of analog computers.  There was also one fundamental issue that hindered the application of analog computers: reprogramming was accomplished by rearranging the interconnecting wires and cables within the assembly.  Digital computers in the early days had basic system instructions (mnemonic identifiers) that were entered into the computer via numerical codes.  This was slow, but faster than rewiring the system.  When operating systems emerged for digital computers, the computer could perform a multitude of tasks and did not have to be designed for specific problems.

If digital is faster (and better?) than analog, why are people going back to vinyl records?  Why do these people state that the quality of the music is better?  Vinyl records are made by equipment that records the actual frequencies as analog signals onto a master copy that captures the nuances of the changes in frequency.  This implies equipment that can record the analog signals directly, which implies a system with vacuum tubes and not computer-generated analog-to-digital converters.  To obtain an effect of the music similar to being at an actual performance (not amplified by digital equipment), the equipment playing the sounds must also be capable of producing analog signals without electronic digital-to-analog conversion.

A very crude example: a piano key causes a hammer to strike a string that vibrates at a given frequency.  The key next to it is at a known, discrete frequency.  E.g., the middle C of a piano is 261.6 Hz, and the black key to the right has a frequency of 277.18 Hz.  This is not continuous, as it would be for an instrument like a violin.  The piano is the digital instrument (discrete bits) and the violin is the analog one (continuous frequencies).  The recording in Reference 2 was made during a practice session by “eg” and has both piano and violin music.  The quality of the recording is not very good, but the differences between the piano (digital) sound and the violin (analog) sound are obvious.  The example was recorded in an auditorium with a small voice recorder, which was digital.  The point is that there is a recognizable difference between the analog and digital types of sound.

Next month, Part II will delve into some of the subtleties of employing digitally modified analog signals, along with more on sound itself.  The modification does not come without consequences.

References:

  1. https://en.wikipedia.org/wiki/Analog_computer
  2. https://recorder.google.com/20b5bb1a-56cd-407a-a24c-c5efc2d92bf3
Technology

Power Challenges for Greater Computing

As mentioned in previous blogs [Ref. 1] the power consumption to manufacture semiconductors has raised concern.  The International Roadmap for Devices and Systems (IRDS) [Ref. 2] has been evaluating that issue.  There is no short-term solution as the complexity of the circuitry continues to increase.  Changes that are being evaluated, e.g., chiplets, do not reduce the demand for semiconductor devices, but are options to produce and yield devices more efficiently.

Reference 3 provides an interesting view into alternate power sources for high-performance devices.  The article indicates that high-end CPUs can require as much as 700 W per chip.  As shown in the chart in the June 2023 blog, power consumption is trending beyond what can realistically be supplied to satisfy the demand.  As pointed out, architectures at 7 nm and below require lower operating voltages, which creates a need for higher currents.  The problem with the power delivery networks (PDNs) is that they must be able to maintain a constant power supply under conditions that can change very rapidly.  One solution is to change the IT infrastructure from 12 volts to 48 volts.

What difference does this make?  For the same power, the current at 48 volts is ¼ of the current required at 12 volts.  The resistive power loss is defined by I²R, so reducing the current by a factor of 4 reduces the power loss by a factor of 16.  Reference 3 provides an example of a 10 MW data center that would save 1.4 million kWh per year.  Beyond the cost saving, the reduction in power loss can help lower the projected power consumption in the country.  Making this happen, however, requires a change in the equipment of most large computing centers.
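A short worked example of that arithmetic, using an assumed 1 kW load and an assumed 10 mΩ distribution resistance (both values are illustrative, not taken from Reference 3):

```python
POWER_W = 1000.0        # power delivered to the load (assumed)
R_LINE = 0.010          # resistance of the distribution path, ohms (assumed)

for volts in (12.0, 48.0):
    current = POWER_W / volts            # I = P / V
    loss = current ** 2 * R_LINE         # P_loss = I^2 * R
    print(f"{volts:4.0f} V: I = {current:6.2f} A, distribution loss = {loss:6.2f} W")

# At 48 V the current is 1/4 of the 12 V value, so the I^2*R loss is 1/16 as large.
```

Scaled up across a whole facility, that factor-of-16 reduction in distribution loss is the kind of effect behind the 1.4 million kWh per year figure cited above.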

This is not even addressing the power required to run artificial intelligence processors.  Reference 4 states: “It’s unclear exactly how much energy is needed to run the world’s AI models. One data scientist trying to tackle the question ended up with an estimate that ChatGPT’s electricity consumption was between 1.1M and 23M KWh in January 2023 alone. There’s a lot of room between 1.1 million and 23 million, but it’s safe to say AI requires an obtuse and immense amount of energy. Incorporating large language models into search engines could mean up to a fivefold increase in computing power. Some even warn machine learning is on track to consume all energy being supplied.”

There was an interesting comparison at the end of the Reference 4 article: “A computer can play Go and even beat humans,” said McClelland. “But it will take a computer something like 100 kilowatts to do so while our brains do it for just 20 watts.”

“The brain is an obvious place to look for a better way to compute AI.”

So, will we continue to make larger and larger models, or will we find better solutions?  Is it time to rethink the standard semiconductor architecture?  Part of this is being pursued through the development of chiplets, which permit placing memory closer to the processor that needs it.  That cuts down on the wiring the signal needs to traverse, which in turn should reduce the power consumption and speed up the computing process.  With new materials being developed, new potential applications should be possible.  The future always has a way of surprising us.  We shall see what happens next.

References:

  1. April, May, and June 2023 blogs
  2. https://irds.ieee.org/editions
  3. https://www.eetimes.com/the-challenges-of-powering-big-ai-chips/
  4. https://blog.westerndigital.com/solving-ai-power-problem/
Semiconductor Technology

More Optical Metamaterials

Previous blogs have covered metamaterials for optics, including an invisibility cloak.  Reference 1 moves into the shorter wavelengths of extreme ultraviolet (EUV) light.  It is interesting in that the original work was focused on attosecond physics (atto is 10^-18).  This field of physics works to understand physical processes, like the photoelectric effect, by creating short pulses of EUV.  The element they were working on was designed to provide a single focal point at 10 mm, using 50 nm EUV light.  The final design consisted of a 200 nm film etched with a million holes.  The key accomplishment is that they produced a membrane for 50 nm EUV light that behaves like a lens does with optical light.  Current semiconductor EUV lithography is in the 13.5 nm wavelength range and uses reflective optics.  This is not a simple transition from the existing work to semiconductor masks, but it shows that it is possible to use metamaterials to focus wavelengths for which we currently do not have a good method of achieving such results.  This is not a complete solution for semiconductors.  The researchers indicated that fabricating this metalens required creating features five times smaller than they had previously made in order to create the focal point.  Semiconductors would require a further reduction by a factor of four, plus the ability to design structures containing multiple types of structural images.

An article [Ref. 2] in the July 2023 issue of IEEE Spectrum magazine addressed the challenges in today’s shrinking cameras that exist in phones and many other products.  It points out that the most space-consuming part of the camera is the lens, and that there is typically a difficult trade-off: a shorter focal length (a small distance to the imaging device) requires a thicker center part of the lens (more thickness equals more space).  That does not even consider that more strongly curved lenses create aberrations that distort the different wavelengths of the image, which requires additional optics to correct.  So the solution was to replace conventional optical technology with a new technology – the metalens.  The metalens is manufactured using semiconductor processing technology to create structures a few hundred microns thick.  The article uses the example of a shallow marsh with grass standing in water: the incoming water moves the grass and changes its position, and grass of different heights would have a different effect on the overall picture due to the motion of each individual stalk.  The following is directly from the article [Ref. 2]:

“The objects in the scene bounce the light all over the place. Some of this light comes back toward the metalens, which is pointed, pillars out, toward the scene. These returning photons hit the tops of the pillars and transfer their energy into vibrations. The vibrations—called plasmons—travel down the pillars. When that energy reaches the bottom of a pillar, it exits as photons, which can be then captured by an image sensor.  Those photons don’t need to have the same properties as those that entered the pillars; we can change these properties by the way we design and distribute the pillars.”

The design incorporates both the height and the thickness of the “stalks” to change the characteristics of the incoming energy.  There is always more work to do to create finer structures and more precise heights.  However, the metalens is an engineered material construct that can provide wave-control functions previously unavailable.

References:

  1. https://www.laserfocusworld.com/optics/article/14293476/metalens-controls-light-within-the-extreme-ultraviolet-realm
  2. https://spectrum.ieee.org/metalens-2660294513
Metamaterials, Semiconductor Technology

The Issue with Artificial Intelligence (AI)

The issue of AI has been in the headlines of major newspapers with all kinds of doom projections.  Reference 1, from earlier this year, lists 11 areas that should create worry.  These items include replacing humans in a variety of jobs, resulting in significant reductions in the work force.  The one that is most relevant to today’s concerns is the impact on the environment.

AI can help in establishing low-emission infrastructure and other related efforts that can be assisted by improved algorithms providing a better understanding of the activities impacting the environment.  This sounds good, but, as with most things, there is a catch.

The training required for advanced AI models depends on quality data, which takes computing power to obtain and process, and employing that data to train a significant, focused model requires considerable energy.  Reference 2 has examples of training data sizes for models.  OpenAI trained its GPT-3 on 45 terabytes of data.  Microsoft trained a smaller system (less data) using 512 Nvidia GPUs for nine days.  The energy consumption was 27,648 kilowatt-hours, or enough to power three homes for a year – and this was for a smaller model than GPT-3.
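Working backward from those figures (and assuming the 27,648 value is in kilowatt-hours, since the three-homes comparison only makes sense for energy, not power), the implied average draw per GPU can be checked with a few lines; the 10,000 kWh-per-home annual figure is a rough assumption, not taken from the reference:

```python
gpus = 512
hours = 9 * 24                       # nine days of training
energy_kwh = 27_648                  # figure quoted above

avg_kw_per_gpu = energy_kwh / (gpus * hours)   # implied average draw per GPU
homes = energy_kwh / 10_000                    # ~10,000 kWh per US home per year (rough)
print(f"average draw per GPU: {avg_kw_per_gpu * 1000:.0f} W")   # -> 250 W
print(f"equivalent homes for a year: {homes:.1f}")              # -> ~2.8
```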

As the capabilities of the models increase, the amount of data grows exponentially.  Reference 3 has a graph, its Figure 1, which projects that machine-learning systems will be pushing up against the total world power supply.  The reason the energy demand is growing so quickly is that more accurate models require more data, and more accurate models generate more profit.

There is another issue: the available semiconductor processing capacity is a limiting factor.  More wafer fabs are therefore required, and the fabs themselves are a power-consumption concern.  The storage clouds are not exempt from this increase in power requirements.  Reference 4 indicates that the computing racks in these data centers require four times as much power as a traditional CPU rack.  Work is being done to reduce the power requirements, but that reduction is being outstripped by the increase in data being processed.

What is the difference between the CPU and GPU racks that increases the power consumption?  A CPU (Central Processing Unit) is the main controller for all the circuitry.  It covers a variety of processes and runs them serially, with a core count that does not currently exceed 64; most desktops have fewer than 12 cores.  This unit is efficient at processing one task at a time.  The GPU (Graphics Processing Unit) is specially designed to handle many smaller processes at a time, like graphics or video rendering.  The core count is in the thousands, to run processes in parallel.

This provides the specialized GPU with the ability to carry a heavy processing load without having to be concerned with the other tasks the computer needs to do.  So, the circuitry does not include the capability for day-to-day operations, but has extra computational power directed at specific kinds of work.
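As a loose analogy only (not a real GPU benchmark), the difference between handling items one at a time and handling a whole batch in a single operation can be felt even on a CPU; here NumPy's vectorized call stands in for the many-cores-in-parallel style of a GPU:

```python
import time
import numpy as np

data = np.random.rand(1_000_000)

t0 = time.perf_counter()
serial = [x * x for x in data]          # one multiplication at a time, in a loop
t1 = time.perf_counter()
parallel_style = data * data            # the whole array in one vectorized call
t2 = time.perf_counter()

print(f"serial loop:     {t1 - t0:.3f} s")
print(f"vectorized call: {t2 - t1:.3f} s")
```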

The net result is that faster processing employs more power.  That power must be generated somewhere.  That increase in electricity generation raises concerns about the total impact on the environment.  This is where AI becomes an environmental concern.

References:

  1. https://blog.coupler.io/artificial-intelligence-issues/
  2. https://www.techtarget.com/searchenterpriseai/feature/Energy-consumption-of-AI-poses-environmental-problems
  3. https://semiengineering.com/ai-power-consumption-exploding/
  4. https://flexpowermodules.com/ai-the-need-for-high-power-lev
  5. https://www.cdw.com/content/cdw/en/articles/hardware/cpu-vs-gpu.html#:~:text=The%20primary%20difference%20between%20a,based%20microprocessors%20in%20modern%20computers.
Science, Technology

3-D and Electronics

There has been a lot written about 2-D transistors and the coming applications involving the properties of the circuitry and its ability to create denser circuits.  There are challenges as the circuit density increases.  While very small, the distance that an electrical signal needs to travel reduces the effective speed of the processor.  So, one solution is to place small pieces of memory near the processor circuitry.  It is also possible to place small amounts of specialized circuitry near other resources required by that circuitry.

Let’s consider the issues as stand-alone problems, which they are not.  If one considers chiplets, there is the problem of aligning the chiplet with the circuitry to which it is being attached.  Since the line widths are on the order of low single-digit nanometers, alignment is critical.  Blind alignment would require high precision on the dimensions of the chiplet.  One solution would be to thin the wafer so that the circuitry being mounted is transparent and can be accurately aligned.  While this may seem unreasonable, thinning wafers to below 30 µm changes the transmission of light through the wafer so that alignment can be done with a great deal of accuracy.

Next, let’s consider attachment.  Since the circuitry is being miniaturized, the available space for bonding pads becomes much smaller.  So the question that comes up is: what is the minimum area required to guarantee a connection that does not change under thermal loading?  Work is being done in this area, but there is no agreed-upon direction at this time.

Even when these problems are sufficiently solved to permit manufacturing, the question arises of how to inspect the joints/connections between the two pieces of circuitry.  Visual inspection is highly improbable, since even thinned wafers have circuit layers on the substrate that block the ability to inspect visually.

Solving this problem then raises another question.  The current design of semiconductors is such that the bottom of the device can be mounted tightly to a heat-transfer material.  This permits cooling of the devices that are generating heat by taking that heat away from the circuit; high temperatures over long periods of time tend to degrade the performance of the circuitry.  In a stacked circuit, the upper portions of the stack do not have the thermal conductivity that would exist in a single level of circuitry.  That raises the question of what the heat contribution is to these upper-level devices and whether it will cause early failure.  This needs to be addressed, and people are working on it, but we do not have the solution in hand yet.  So, 3-D circuitry has potential, but there are many issues that need to be addressed.

One area where 3-D has shown some promise is the attempt to print batteries onto circuits.  Back in 2016, there were proposals to employ 3-D holographic lithography to create these batteries [Ref. 1].  The limiting factor is the 3-D creation of the required electrode structures.  There are current claims regarding the development of 3-D printed batteries [Ref. 2], but the public release has been slow.

3-D electronics has potential to improve the existing products, but there is still much research and development required.

References:

  1. http://www.rdmag.com/news/2015/05/3-d-microbattery-suitable-large-scale-chip-integration
  2. https://spectrum.ieee.org/3d-printing-solid-state-battery-lithium-ion#toggle-gdpr
Electronics, Semiconductor Technology, Uncategorized