Artificial Intelligence (AI) and Nanotechnology

“AI” has become a “hot” topic in both technical publications and general newspapers.  There have been multiple stories about ChatGPT 3.5 and 4.0 and other chatbots [Ref. 1], as well as the addition of AI to search engines by Google (Bard) and Microsoft’s Bing (ChatGPT+) [Ref. 2].  Browsers, like Chrome, have been adding AI-capable features [Ref. 3].  There has been news from the medical field of AI being employed by medical professionals and improving the diagnosis of patients [Ref. 4].  On the negative side, there was a publication about Chaos-GPT with very negative connotations [Ref. 5].  How do these chatbots apply to nanotechnology, or does the technology even apply?

There have been many attempts to produce tools that can assist humans in making decisions or even make the decisions themselves.  Automation of equipment is an obvious example.  In the lumber industry, equipment has been developed that inspects a segment of a harvested tree, calculates the orientation that gets the maximum lumber from the tree, and then does the actual cutting [Ref. 6].  The ongoing work on self-driving vehicles is another application of AI.  The millions of lines of code keep increasing as new options have to be handled depending on the circumstances the vehicle encounters.

ChatGPT was developed by OpenAI as a chatbot [Ref. 7] and is different from previously released chatbots.  While it was released in November of 2022, it was not until late March 2023 that its applications started making headlines.  Its responses do not require accessing a restricted set of data; instead, it assembles unstructured response data to arbitrary questions in a manner and format that gives the appearance of a knowledgeable response.  Algorithms are developed to guide the collection and organization of data relevant to the subject under investigation.  A proper arrangement of algorithms can make a question appear as if it were being answered by a person.

In the 1980s, there was significant work on expert systems, which are a precursor to today’s algorithm-driven chatbots.  Computing power and storage capacity were orders of magnitude smaller than today, the amount of data available was significantly less, and computations were much slower.  Still, there were interesting developments.  One of the observations from that work was that each expert system had to have a starting base of data.  As the system encountered additional data about choices and their outcomes, the database changed the probabilities of the possible outcomes.  So, the system “evolved” based on its environment, i.e., machine learning.  A system for farming in colder climes would provide different answers from one in the tropics, which is understandable, since the two systems learned from distinct data.
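
A minimal sketch of that idea follows; the crop, the numbers, and the class itself are hypothetical and are not taken from any system described above.

    # A sketch (hypothetical data) of how an early expert system could "evolve":
    # it starts from a seed knowledge base of outcome counts and updates the
    # probability of each recommendation as new local observations arrive.
    from collections import defaultdict

    class CropAdvisor:
        def __init__(self, seed_counts):
            # seed_counts: {crop: {"success": n, "failure": m}} -- the starting base of data
            self.counts = defaultdict(lambda: {"success": 0, "failure": 0})
            for crop, outcome in seed_counts.items():
                self.counts[crop].update(outcome)

        def record(self, crop, succeeded):
            """Add one observed outcome from the local environment."""
            self.counts[crop]["success" if succeeded else "failure"] += 1

        def success_probability(self, crop):
            c = self.counts[crop]
            total = c["success"] + c["failure"]
            return c["success"] / total if total else 0.0

    # Two advisors start from the same seed data but diverge as they observe
    # different climates -- the "evolution" described above.
    cold = CropAdvisor({"barley": {"success": 5, "failure": 5}})
    tropical = CropAdvisor({"barley": {"success": 5, "failure": 5}})
    cold.record("barley", True)
    tropical.record("barley", False)
    print(cold.success_probability("barley"), tropical.success_probability("barley"))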

Today’s computing power is orders of magnitude greater than in the early 1990s.  Memory capacity has also increased greatly.  But so has the data.  It is the author’s opinion that there is more data created and stored online in a single day now than there was in an entire year in the 1990s.  This raises the question of where and how the chatbots will get their information.  One recent review indicated that it is possible that some chatbots have information that was only current as of 2018.  A lot happens in five years.

A recent article [Ref. 8] expresses the concerns of an AI ethicist.  The development of machine-learning algorithms to assist in the responses of a chatbot could lead to replacing human judgement about situations with the chatbots’ output.  She is quoted as saying “Using chatbots in search engines . . . is a bonkers idea that everyone is now racing to do.”

It is too early to decide how machine-learning chatbots will evolve and assist in developing new materials or technologies in the nano realm.  In the late 1990s, text mining was the next computer-driven technology that was expected to find very widespread application.  It has evolved into focused applications, e.g., evaluating customer databases to determine product or service issues, or similar evaluations of structured text.  A report from Stanford states: “A lot of inefficiencies and errors that happen in medicine today occur because of the hyper-specialization of human doctors and the slow and spotty flow of information” [Ref. 9].  Hopefully, nanotechnology will see something similar that can evaluate research similarities and provide a database, as appears to be happening in medicine, that researchers in the nano realm can use to move toward the future more quickly.  Chatbots can apply to nanotechnology given the proper access to relevant data.

References:

  1. Top 25 Chatbot Case Studies & Success Stories in 2023 https://research.aimultiple.com/top-chatbot-success/
  2. https://www.pcmag.com/news/chatgpt-alternatives-ai-chatbots-ready-to-answer-your-burning-questions
  3. https://www.digitaltrends.com/computing/best-ai-chatbots/
  4. The AI Will See You Now, Wall Street Journal, Saturday, 04/08/2023, page C1
  5. https://decrypt.co/126122/meet-chaos-gpt-ai-tool-destroy-humanity
  6. https://www.innovating-automation.blog/why-a-sawmill-needs-automation/
  7. https://chatgpt.pro
  8. Weekend Confidential with Timnit Gebru, Emily Bobrow, Wall Street Journal, Saturday, 02/25/2023, page C6
  9. https://news.stanford.edu/press-releases/2023/04/12/advances-generalizable-medical-ai/
Technology

New transistors in the nano realm

The current structures for semiconductor central processing units (CPUs) are being designed and produced with some dimensions in the single-digit nanometer realm.  Besides being hard to make, there are material challenges.  When one wants to build a structure, whether it is a large building or a very small line, the roughness (irregularities) of the edges is an issue.  Bricks can be off a little with respect to each other and still create the appearance of a straight line.  But if large stones were used in a small wall, the irregularities would be very obvious.  As lines and objects get smaller and smaller, the molecules that create the structure can be large enough to introduce irregularities in the structure, which can create issues with the electrical properties of the devices.
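
As a rough, illustrative comparison (the numbers are assumptions, not taken from any specific process), the same 2 nm of edge roughness matters far more on a narrow line than on a wide one:

    \frac{2\ \text{nm}}{100\ \text{nm}} = 2\%\ \text{of the linewidth}, \qquad \frac{2\ \text{nm}}{10\ \text{nm}} = 20\%\ \text{of the linewidth}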

Smaller structures would appear to be able to be created faster and with less energy.  But if the structures need more precise alignment and fewer irregularities, fast exposures might not be the best approach.  There are variations in the energy beams doing the exposure, and everything needs to be uniform.  One method is to increase the energy required to form the image of the structure, which means making the imaging material less sensitive.  This requires a balance between image formation and overall throughput of the equipment, which implies greater energy needed for manufacturing.  This raises the possibility of needing new materials and new structures.
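
A simple way to see the balance is to treat throughput as exposure time plus overhead.  The sketch below uses assumed dose and source-power numbers purely for illustration; none of the values come from a specific tool.

    # Illustrative throughput trade-off: a less sensitive resist needs a higher
    # exposure dose, so each field takes longer to expose.
    def fields_per_hour(dose_mj_cm2, source_power_mw_cm2, overhead_s=0.5):
        """Rough single-field throughput: exposure time plus stage/settling overhead."""
        exposure_s = dose_mj_cm2 / source_power_mw_cm2  # mJ/cm^2 over mW/cm^2 gives seconds
        return 3600.0 / (exposure_s + overhead_s)

    for dose in (20, 40, 80):  # higher dose = less sensitive resist, better edge control
        print(f"dose {dose} mJ/cm^2 -> {fields_per_hour(dose, source_power_mw_cm2=50.0):.0f} fields/hour")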

Reference 1 is from a business and technology guru, George Gilder.  He mentions that Huawei has patented a graphene transistor.  (Other companies have patented different ideas and structures.)  He states: “Huawei’s breakthrough is deeply impressive.  Because graphene is a supreme conductor of both heat and electricity, graphene transistors may operate at 10 times, or more, the speed of silicon devices, using perhaps less than a tenth of the power. . .  graphene conducts electrons with minimal resistance and graphene transistors need far less power than silicon to switch on and off. But they will be slow no longer, switching at least an order of magnitude faster than silicon. And as a “two dimensional” (i.e., one atom thick) material graphene circuits could function with only atomic distances between them.”

There is continuing development in the area of metamaterials.  Engineers from Caltech and ETH Zurich created a method to design metamaterials using quantum mechanics principles [Ref. 2].  Work has been done on bending electromagnetic waves, and an earlier set of blogs described the impact of metamaterials designed for specific purposes.  This team approached the design of metamaterials based on quantum theory.  The researchers realized that “quantum mechanics predicts the existence of certain exotic types of matter: among them, a ‘topological insulator’ that conducts electricity across its surface while acting as an insulator in its interior.  They realized that they could build macro-scale versions of these exotic systems that could conduct and insulate against vibrations instead of electricity by using principles of quantum mechanics.”

Given that it is possible to create metamaterials, how does this relate to semiconductors?  As mentioned, creating means of focusing and bending light opens the possibility of creating optical connections in a semiconductor device.  Optical signals can move faster than electrons in wires.  This provides increased speed at a lower energy level, which means less energy loss that would otherwise have become heat.  So, it would be faster and use less power.  What about the transistor itself?  With the ability to create metamaterials that function in very different ways, design tools are coming that could provide the ability to design a new functioning “transistor”.  It still needs to be invented.  Coming soon?

References:

  1. https://www.gilderreport.com/when-we-have-to-beg-the-chinese-for-their-technology-will-the-china-hawks-lead-the-delegation/
  2. https://www.sciencedaily.com/releases/2018/01/180118100819.htm

Electronics, Metamaterials, Nanotechnology

More nanomaterials, or is it metamaterials, or semiconductors?

Last month’s blog covered changes that are coming in the structure of semiconductor transistors.  With the announcement of 3nm devices [Ref. 1], there is no uncertainty that structures are being designed in the small nanoscale region.  As this shrinkage continues, the application of two-dimensional (2-D) materials is increasingly important.

Energy efficiency is important in order to continue the miniaturization of devices and provide improved performance.  As devices, like phones, add more functions in roughly the same form factor, the ability to have longer battery life (more power for a longer time) becomes more important.  This same ability is key for creating electric vehicles (EVs) with increased travel range.  Solutions are developed, but the process can sometimes take a long time.

A paper [Ref. 2] published in 2015 describes work started in 2011 on improved layering methods to produce 2-D materials with interesting properties.  Work has been focused on transition metal dichalcogenides.  This examines the properties of combining one of the 15 transition metals (molybdenum, tungsten, etc.) with one of the chalcogen family (sulfur, selenium, or tellurium).  At that time, there were hopes to develop a combination that could be employed in place of silicon.  The work in 2015 expanded the original possibilities by sandwiching a transition metal, like titanium, between monoatomic layers of another metal and employing carbon atoms to bind the layers together and produce a stable material.  The key to the researchers’ success was the discovery of a class of materials called MAX phases.  (M is for the transition Metal, A is for “A group” metals, and X is for carbon and/or nitrogen.)  This terminology builds on a material developed in 2011 and called “MXene”, which is produced by etching and exfoliating atomically thin layers of aluminum from MAX phases.

Figure 1 is from the authors’ paper [Ref. 3] showing the detailed structure of the MXenes.

“Schematics of the new MXene structures. (a) Currently available MXenes, where M can be Ti, V, Nb, Ta, forming either monatomic M layers or intermixing between two different M elements to make solid solutions. (b) Discovering the new families of double transition metals MXenes, with two structures as M′2M″C2 and M′2M″2C3, adds more than 20 new MXene carbides, in which the surface M′ atoms can be different from the inner M″ atoms. M′ and M″ atoms can be Ti, V, Nb, Ta, Cr, Mo. (c) Each MXene can have at least three different surface termination groups (OH, O, and F), adding to the variety of the newly discovered MXenes”.  [Ref. 3]

Fast forward to 2023: a report [Ref. 4] describes the anticipated advantages of MXenes in a number of potential applications.  MXenes are produced as nanometer-thick flakes, which can be dispersed in water or another solution and applied to surfaces.  Work has been done to create a supercapacitor and apply it to fabric.  The material has been demonstrated to be able to power a 6-volt device for over an hour.  While this seems promising as a replacement for lithium-ion batteries, there are issues.  MXenes tend to oxidize and degrade in normal conditions.  A solution has been demonstrated that employs high-frequency acoustic waves to remove the oxidation.  It is a fast process that is repeatable.  The contention is that the MXenes created for this purpose have four times the storage density of lithium-ion batteries.

There is additional work on employing MXenes, which have characteristics favorable to sensors, for medical purposes such as detecting cancer.  Combining MXenes with a gold nanoarray provided a base for in-situ testing.  Adding biosensors for identifying specific biomarkers has been demonstrated.  This is an early effort to improve detection of specific cancers, and much research is still required to develop it into a usable device.

While there have been a number of promising applications, immediate availability is not happening.  One of the reasons is that there is no source for a consistent supply of the material.  The challenge will be to develop processes that provide consistent, high-grade material.  Until then, MXenes remain a great material for developing materials that could have breakthrough results.

References:

  1. https://auto.economictimes.indiatimes.com/news/auto-components/tsmc-begins-pilot-production-of-3nm-chips/88071568
  2. https://spectrum.ieee.org/why-mxenese-matter
  3. https://pubs.acs.org/doi/full/10.1021/acsnano.5b03591
  4.  https://spectrum.ieee.org/new-method-for-layering-2d-materials-offers-breakthrough-in-energy-storage
Metamaterials, Nanotechnology, Semiconductor Technology

A New Year, New Opportunities

Semiconductors are always in the news as the drive for greater capabilities in ever-decreasing package sizes continues.  With the announcement of 3nm node devices going into production, the challenges continue to increase.  TSMC has started pilot production of 3nm devices [Ref. 1].  The shrinkage in size is at the point where new transistor designs are needed.  The direction appears to be moving from FinFET to Gate-All-Around (GAA) FET designs.  Figure 1 is from reference 2.

Obviously, there are manufacturing challenges.  One of the proposed GAA FET designs is based on nanosheets of material.  It appears that some manufacturers will introduce the nanosheet FET at 3nm and others at 2nm.  (More details on the development of these FETs can be found in reference 2.)

Researchers at Tsinghua University in Beijing, China, have developed a transistor with atomically thin channels that has a gate length of 0.34nm [Ref. 3 & 4].  This is still years from manufacturing, if it happens at all.  Many technologies are demonstrated, but only a very limited number can be developed into processes that work in volume manufacturing.  However, this work indicates there are possibilities for continued reduction in the size of transistors.

Researchers at Georgia Tech, Tianjin University, and Kwansei Gakuin University have demonstrated a nanoelectronics platform based on graphene [Ref. 5].  The process employs e-beam lithography to connect the edges to silicon carbide devices.  If oxygen can connect to the graphene, it becomes graphane, which is an insulator. 

There are other options for improving the performance capabilities of semiconductor devices.  Chiplets [Ref. 6] are small elements of a circuit that can be employed across a large variety of devices.  The advantages of chiplets include the ability to co-locate processors with memory immediately adjacent.  This reduces the time it takes a signal to move to or from memory, which results in improved performance.  But nothing is without challenges.  Reference 7 covers the need for heterogeneous integration to create multi-die packages.  The advantage of smaller-area die/chips is the ability to increase yields, due to less complex individual semiconductor functions.

There is another consideration when stacking chiplets.  A single semiconductor die is built into a package that dissipates heat to keep the device temperature from becoming too hot.  Stacking one or more chiplets moves these portions of the circuitry away from a heat sink.  The buildup of heat will impact performance and could have an adverse impact on long-term device reliability.

All of these efforts within the semiconductor industry and among researchers worldwide require coordination.  In the 1990s, the International Technology Roadmap for Semiconductors (ITRS) was developed to provide guidance for researchers addressing future needs that would be required in production over the next 10 to 15 years.  This roadmap was updated annually.  The roadmap committee later restructured the ITRS format to address seven different technology areas.  The roadmap was renamed the International Roadmap for Devices and Systems (IRDS) to more appropriately address the needs of the complete process.  Responsibility for the IRDS was moved from the roadmap committee to the Institute of Electrical and Electronics Engineers (IEEE).  The focus of the IRDS is still the requirements for the next fifteen years on a continually moving basis.  More details and the roadmap are available in reference 8.

Changes are coming to semiconductor technology that will improve the performance of devices and create new opportunities for innovative products that require greater computing power.

References:

  1. https://auto.economictimes.indiatimes.com/news/auto-components/tsmc-begins-pilot-production-of-3nm-chips/88071568
  2. https://semiengineering.com/new-transistor-structures-at-3nm-2nm/
  3. https://www.tomshardware.com/news/semi-transistors-atom-thick
  4. https://www.nature.com/articles/s41586-021-04323-3
  5. https://www.graphene-info.com/researchers-take-step-towards-graphene-electronics
  6. November 2022 Blog http://www.nano-blog.com/?m=202211
  7. https://semiengineering.com/heterogeneous-integration-co-design-wont-be-easy/
  8. https://irds.ieee.org 
Semiconductor Technology

Another year of interesting developments

Energy from Fusion – As reported in Phys.org [Ref. 1], a breakthrough announcement in fusion came from Lawrence Livermore National Laboratory.  For the first time, an experiment in fusion created more energy than was required to generate the fusion.  Unlike nuclear fission reactors, which employ radioactive fuel to generate power, fusion does not rely on fissile materials.  The process is often described as how the sun creates its energy.  The idea behind the decades of research is that fusion energy would be a clean (green) and reliable source.

What is the fusion process?  The effort to develop fusion energy employs hydrogen.  (A little chemistry is needed.)  Hydrogen in its most abundant form consists of one proton and one electron.  There are two other forms of hydrogen: deuterium, which consists of one proton, one electron, and one neutron, and tritium, which has two neutrons.  What is necessary to create energy is to combine one atom of deuterium with one atom of tritium.  The result of this combination is one atom of helium, one neutron, and energy [image from Ref. 2].
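
Written as a reaction, with the commonly quoted energy split between the helium nucleus and the neutron (standard values, not taken from the referenced article):

    {}^{2}_{1}\mathrm{D} + {}^{3}_{1}\mathrm{T} \;\rightarrow\; {}^{4}_{2}\mathrm{He}\ (3.5\ \mathrm{MeV}) + \mathrm{n}\ (14.1\ \mathrm{MeV})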

The challenges for this process include the fact that the required temperature is millions of degrees, at pressures that force the hydrogen atoms together.  Normally, hydrogen nuclei would repel each other.  In these conditions, hydrogen becomes a plasma, not a gas.  The plasma must be confined, which in many designs is done with a magnetic bottle.  Making this happen pushes the state of current technologies.  The Lawrence Livermore experiment instead employs 192 very high power lasers focused on a very tiny spot.  In that tiny spot is a small pellet containing the two different isotopes of hydrogen.  The entire container is roughly 2,000 microns.  There has been some information that the successful test used a modified shell thickness.  There is a reasonable probability that the modifications in the thickness were in the nanometer range.

This effort was a proof of concept.  It is estimated that the energy produced for a fraction of a second was greater than all the power generation in the world for that same time.  This is only the next step in the development of fusion power.  But, it is a significant step proving that power can be generated.  Not everyone agrees with fusion power.  Reference 3 has opposing opinions.

Graphene highlights – Graphene has dropped from view as it becomes more integrated into everyday products.  However, there are developments occurring.  Researchers at the National Graphene Institute employed graphene as an electrode to measure the electrical force applied to water and the resulting rate of separation [Ref. 4].  Establishing these parameters should permit improvements in extracting hydrogen from water.  Hydrogen as a fuel is being explored by a number of companies.

Researchers at Northeastern University (Boston) and the University of Texas at Arlington have developed a process to measure the topmost atomic layer of materials [Ref. 5].  They have named the process Auger-mediated positron sticking (AMPS).  A key element is that when positrons change from the vacuum state to being surface bound, the state change excites electrons into the vacuum.  More detail is available in the reference.

A nano-based electronics platform has been developed by researchers at Georgia Tech, Tianjin University, and Kwansei Gakuin University [Ref. 6].  The premise is that this graphene-based platform is compatible with conventional silicon semiconductor technology.  The thought is that using this new platform for electronics will produce smaller and stronger circuitry.

Scientific Integrity – Unfortunately, this is a topic that has been covered multiple times over the last few years.  Sometimes research results have erroneous conclusions, incomplete data, or data “fitted” to prove a hypothesis.  The most recent report that has raised controversy, published December 15, 2022 [Ref. 7], is from the Agency for Healthcare Research and Quality, which is part of the US Department of Health and Human Services.  Their work estimates there are 130 million emergency department visits per year within the United States.  Using an average of 25,000 visits per department results in approximately 5,200 emergency departments in the United States.  Their report indicates there are 50 deaths per facility per year, or 260,000 deaths per year in emergency departments due to diagnostic errors!  These conclusions were republished in late December 2022 by the DC Medical Malpractice & Patient Safety Blog [Ref. 8], on Twitter by the New York Times [Ref. 9], and in the UK Daily Mail [Ref. 10].  This information is spreading worldwide.
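
The report’s arithmetic, as described above, works out as follows:

    \frac{130{,}000{,}000\ \text{visits/year}}{25{,}000\ \text{visits per department per year}} \approx 5{,}200\ \text{departments}, \qquad 5{,}200 \times 50\ \text{deaths per department per year} = 260{,}000\ \text{deaths/year}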

There is a problem with this report that relates to scientific integrity.  An article published in the Wall Street Journal on December 30, 2022 [Ref. 11] raises questions about the accuracy of the report’s conclusions.  That article points out the implication that one out of every 500 patients dies of physician error.  It questions the statistical method employed.  The total number of patients in the study was 503, with one patient dying due to a delayed diagnosis by an ER physician.  The article also points out that the study was focused on all kinds of medical errors and was not designed specifically to estimate death rates from erroneous or late diagnoses.  The sample size was insufficient to allow that type of conclusion.

The result is that a report with shocking results, a quarter million emergency room deaths due to physician error, raises questions about the medical profession.  When errors in the process demonstrate that the conclusions are inaccurate, it raises questions about the researchers and their organizations.  As a side issue, the study was made in Canada using Canadian emergency room data.  How does this demonstrate what happens in the United States?  There have not been any clarifications issued as of the date of this blog.  When issued, any clarification cannot undo the damage of the published data in various media.

References:

  1. https://phys.org/news/2022-12-nuclear-fusion-scientists-latest-breakthrough.html  
  2. https://www.energy.gov/science/doe-explainsnuclear-fusion-reactions
  3. https://www.counterpunch.org/2022/12/28/nuclear-fusion-dont-believe-the-hype/  
  4. https://www.graphene-info.com/researchers-use-graphene-electrodes-split-water-molecules
  5. https://statnano.com/world-news/96864/Researchers-use-graphene-to-measure-the-properties-of-a-material%E2%80%99s-surface-layer
  6. https://www.graphene-info.com/researchers-take-step-towards-graphene-electronics
  7. Diagnostic Errors in the Emergency Department: A Systematic Review  https://effectivehealthcare.ahrq.gov/products/diagnostic-errors-emergency/research
  8. https://www.jdsupra.com/legalnews/misdiagnoses-lead-to-250-000-er-8669286/ 
  9. https://twitter.com/nytimes/status/1604012290734522368
  10. https://www.dailymail.co.uk/health/article-11546585/ER-misdiagnoses-kill-quarter-million-Americans-year.html
  11. https://www.wsj.com/articles/false-alarm-about-emergency-rooms-ahrq-physicians-er-misdiagnoses-mortality-rate-us-canada-trust-11672136943
Science

Chiplets – What are semiconductor chiplets and why are they needed?

As there are more references to the development of semiconductor chiplets, it might be useful to consider what they are and why they are needed. As the size of semiconductor features shrinks, more and more transistors can be packed into the same area. This increase in the density of features permits the continual progress following the predictions of Moore’s Law. There are consequences to this increase in density. One is that as more and more capabilities are added, there is a need to provide additional means of making the device’s functions available to the external world. This necessitates an increase in the number of input/output (I/O) connections for the device. There is a limit on how dense the spacing of these I/O points can be, due to the need to connect each and every point to additional circuitry without misconnecting the device.


The solution to the issue of handling the interconnects has produced a number of answers. Incorporating newer (custom designed?) materials and developing customized techniques, including 2.5D, 3D-IC, wafer-level packaging, and system-in-package [cf. Ref. 1 for additional details], are among the possibilities. These are among the new ideas for developing interconnects and other ways to get signals and information out of the device.


The larger influencing factor is the cost to implement and manufacture. According to Reference 2: “There are fewer customers at 5nm than there were at 7nm, and there were fewer at 7nm than at 10nm, because a smaller number of companies can extract value from the large capital investments needed to develop these new products.” The issue is funding. Any design needs to be able to provide a return on the development and production costs. The author has heard of design costs for a leading-edge device as high as $100M or more! That also indicates the hours needed to develop all aspects of the design, including the tooling needed for manufacturing. What is an acceptable defect rate for 100 million transistors on a single device becomes a disaster when there are tens of billions of transistors on the device.
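
The effect of die size on yield can be illustrated with the classic Poisson yield model; the defect density and die areas below are assumptions for illustration only, not figures from the referenced articles.

    # Poisson die-yield model, Y = exp(-A * D0): a defect rate that is tolerable
    # for a small die becomes painful for a very large one, which is one motivation
    # for splitting designs into chiplets that can be tested before packaging.
    import math

    def die_yield(area_cm2, defect_density_per_cm2):
        """Probability that a die of the given area contains zero killer defects."""
        return math.exp(-area_cm2 * defect_density_per_cm2)

    D0 = 0.1  # assumed killer-defect density, defects per cm^2
    for area in (0.5, 2.0, 8.0):  # hypothetical die sizes in cm^2
        print(f"{area:4.1f} cm^2 die -> yield {die_yield(area, D0):.2f}")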


A solution is needed. Enter the concept of the “chiplet”. As Reference 3 explains, a chiplet is a sub-device item that provides certain predetermined functions. One example is that a chiplet could be a fully operational, specialized timing circuit. If this concept moves forward, and it appears to be doing so, there will be libraries of function designs that can be selected from to perform specific actions/calculations. These chiplets can be packaged and mounted, mounted directly to the wafer (similar to flip-chip assembly at the printed wiring board level), or bonded wafer segment to wafer. The effort to create these new capabilities will not be easy. Several major manufacturers have created a consortium [Ref. 4] to standardize the specifications and capabilities of chiplets.


The question is why these are needed. Reference 5 describes the need for faster computing power. The latest exascale supercomputer CPU and GPU designs mix and match complex chip functions in advanced packages. These computers will be 1,000 times faster than existing supercomputers. “That’s beginning to change. Some, but not all, exascale supercomputers are using a chiplet approach, particularly the U.S.-based systems. Instead of an SoC, the CPUs and GPUs in these systems incorporate smaller dies or tiles, which are then fabricated and reaggregated into advanced packages. Simply put, it’s relatively easier to fabricate smaller dies with higher yields than large SoCs.”


On the smaller scale, chiplets can provide time-saving designs that can produce devices in very small packages. The medical community benefits from smaller devices, especially implanted devices. Typically, smaller devices require less power, which in turn provides longer battery life. In other cases, the ability to reduce the size of the device enables applications that are not currently possible. Will everything go to chiplets? Probably not. Advanced-capability devices can benefit from more efficient packaging, which chiplets appear to provide. The future will inform us of how effective chiplets can be.
References:

  1. https://semiengineering.com/knowledge_centers/packaging/advanced-packaging/chiplets/
  2. https://semiengineering.com/scaling-advanced-packaging-or-both/
  3. https://semiengineering.com/paving-the-way-to-chiplets/ https://gildersdailyprophecy.com/posts/wafer-scale-integration-is-underway
  4. https://www.designnews.com/electronics/tech-giants-form-consortium-standardize-chiplet-interfaces
  5. https://semiengineering.com/chiplets-enter-the-supercomputer-race/?cmid=27ae90f7-287c-484e-b4b9-4299ffd5c533
Semiconductor Technology

Where Wafer Scale Integration Came From

This is a story that starts in the early days of electronics.  As new concepts were developed for applications employing electricity, there was a need to develop a means of assembling components into complete electric circuits.  The “breadboard” was developed.  It was a non-conducting material with holes punched through it in regular rows and columns.  By inserting components through the holes and soldering wires, circuit connections could be created.  Vacuum tubes provided a means of controlling the flow of electricity through the circuit.  Obviously, this process was not viable for consumer products, which needed to be manufactured in volume.  Vacuum tube electronics date to the early 1900s and were used in sound recording and reproduction.

Printed wiring boards (PWBs), also known as printed circuit boards (PCBs), were developed.  Copper patterns were created on an insulating substrate (board) and holes were drilled where components were to be inserted.  After the components were inserted, the PWB with its components (resistors, capacitors, inductors, vacuum tube mounts, connectors, etc.) was passed through a soldering machine.  This machine had molten solder (primarily a tin-lead composition) in a large tank/bath.  A standing wave was created, and the PWB passed over the standing wave just touching the component-board surface.  This created an assembly with the components firmly attached to the board.  Connectors permitted tying additional PWBs together to create the desired electrical system.

As systems became more complex, the quantity of PWBs needed to create the desired system became very large, and the number of vacuum tubes per PWB increased.  The weak link in these systems was the vacuum tubes.  Their life span was quite variable, and when a large number of vacuum tubes were involved, the system reliability was poor.  Companies that required highly reliable, continuously functioning systems needed to find a better solution.

Among the companies in need was AT&T.  Their Bell Labs was given the task of developing a reliable substitute.  The vacuum tube switching circuits constantly needed to have vacuum tubes replaced.  In addition, vacuum tubes take time to “warm up” to function properly.  While the tubes are functioning, they are generating heat, and heat is a source of their failure.  In December 1947 (an interesting year for other reasons), Bell Labs researchers demonstrated a signal output increase when two gold contacts were applied to a germanium substrate [Ref. 1].  This was the first demonstration of the transistor effect.  The development of the transistor grew rapidly.  In the late 1950s, Jack Kilby (Texas Instruments) developed a memory cell, which was a combination of various transistors on a single substrate.  Shortly after this development, Robert Noyce created a planar circuit that had the interconnections (wires) integrated into the surface of the substrate.  It was an integrated circuit (IC).

Fast forward to the early 1970s: Intel developed a 4-bit microprocessor, which was rapidly followed by 8-bit and then 16-bit devices [Ref. 3].  The advantage of the microprocessor was that it provided a means of changing the function of the circuitry without having to physically change the actual circuit.  The need for additional functions in the circuitry has led to continual growth in the complexity of the circuits, which in turn has driven the development of a greater and greater number of features on the IC.  In order to accomplish this, smaller and smaller features were continually developed.  (cf. Moore’s Law, Ref. 4, for more details.)

The current, newer and more capable devices with billions of transistors have a limit due to the number of output connections for the devices.  These ICs, like all the previous ones, are mounted on PWBs for interconnection.  Time is required for an electrical signal from one IC to travel to the PWB interconnection, traverse the PWB circuit lines, and then enter the desired IC.  While these times are a fraction of a fraction of a second, they delay the processing.
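
A rough, illustrative calculation (the numbers are assumptions, not from the referenced work): a signal on a typical PWB trace travels at roughly half the speed of light, about 15 cm/ns, so a 10 cm path between two ICs adds about

    t \approx \frac{10\ \text{cm}}{15\ \text{cm/ns}} \approx 0.7\ \text{ns},

which is on the order of two clock cycles for a processor running at 3 GHz, before any processing even begins.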

A possible solution is to create the desired circuit on a single silicon wafer instead of using multiple types of ICs attached to a PWB.  This solution is called wafer scale integration.  Researchers from UCLA have proposed “packing dozens of servers’ worth of computing capability onto a dinner-plate-size wafer of silicon.” [Ref. 5]  There are a number of challenges, as well as some ideas on how to address them.

Next month’s blog will discuss “Chiplets”.

References:

  1. https://en.wikipedia.org/wiki/Transistor
  2. https://gildersdailyprophecy.com/posts/wafer-scale-integration-is-underway
  3. https://en.wikipedia.org/wiki/Transistor
  4. Moore’s law – Wikipedia, https://en.wikipedia.org/wiki/Moore%27s_law
  5. https://spectrum.ieee.org/goodbye-motherboard-hello-siliconinterconnect-fabric#toggle-gdpr
Semiconductor Technology

Advancements for Nano and Below

Previous blogs have mentioned the need for improved tools, with the goal of at least one order of magnitude greater capability than the objects being evaluated.  That capability is beginning to become available.  The May/June issue of Photonics Focus (page 15) has a summary of a paper [Ref. 1] that addresses enhanced resolution with X-rays.  As mentioned in a few blogs, including last month’s, the resolution limit of light is determined by its wavelength.  The shorter the wavelength, the smaller the image that can be resolved.  That is one of the reasons the semiconductor industry invested so much time and money in getting EUV lithography to work.  With a wavelength of 13.5nm, the limit for just resolving images is only a few nanometers.  The article refers to an “achromat”, an optical device that separates a light beam and then recombines the images to produce the result: sharp optical images in photography and microscopy, though still size-limited by the wavelength being used.  The paper describes a similar arrangement to the achromat, but employing X-rays as the source.  The proof-of-concept microscopy employed a synchrotron, which is not practical for most cases.  However, the initial efforts in EUV lithography were built on X-ray lithography, which employed a synchrotron.  Something to keep an eye on for developments.

The second reference describes a method of improving the stability and imaging time of Tip-Enhanced Raman Spectroscopy (TERS).  One of the challenges that drove this development is the need to resolve minute details across large samples.  (Large in this case means micron-sized surfaces.)  Why is this important?  As the size of the materials being employed in specific applications shrinks, the need to guarantee that the surface is exactly as specified becomes critical.  From experience, I know of a superior sensor that was developed about 15 years ago.  It employed sheets of graphene.  It never became a product because the graphene had random defects, and there was no instrumentation available to detect the defects.  The need for tip enhancement is due to the fact that, in the typical Raman spectroscopy process for deformable surfaces, the stability time is on the order of milliseconds.  The authors have developed a methodology that permits longer imaging times with increased scanning areas and better resolution.  Obviously, much more work is required, but a direction has been demonstrated for additional efforts.

The following is about work performed at Duke University [Ref. 3].  Their work has been referenced with respect to metamaterials in blogs on optics.  To fully appreciate what they have done, it is necessary to explain some of the background of their work.  In order to control light, it is necessary to create structures that enable conditions that can be considered negative indices of refraction.  When light impinges on a metasurface, it frees electrons in the metal and creates an oscillation.  With the appropriate structure, the light is effectively absorbed.  Their efforts have trapped light beneath the surface.  These metasurfaces consist of a base metal layer with a nanometer layer of transparent material in specific shapes.  The top of this three-layer structure is a layer of silver nanocubes.  The entire structure is only several nanometers thick.  Colloidal chemistry enables the synthesis of shaped nanomaterials across large areas, even wafer-sized ones.  Their recent efforts inverted the layers and created nanosized indents in the surface.  Constructing different sizes and shapes of indents increases the range of wavelengths that can be modified with one structure.

Tools and processes are being developed to work at scales that were impossible to even observe only one or two decades ago.  There is also interesting work [Ref. 4] on using the chirality of materials to enable an entirely new field for controlling the properties of electromagnetic waves.  That is left to the reader to explore if interested.

References:

  1. A. Kubec, et al., Nat. Comm., 2022, doi: 10.1038/s41467-022-28902-8
  2. https://www.laserfocusworld.com/science-research/article/14281057/improving-the-stability-and-imaging-time-of-ters
  3. https://www.laserfocusworld.com/science-research/article/14280354/plasmonic-metasurface-fab-process-flip-expands-its-wavelength-range
  4. Xu et al., Adv. Photon., 2022, doi: 10.1117/1.AP.4.4.046004
Nanotechnology

Metamaterials – Optics

In June’s blog, invisibility cloaking was covered.  While that is one application of optical metamaterials, their broader advantage is the ability to extend the range of traditional optics.  (While the work has been ongoing for years, it does not have the impact of the invisibility cloak.)  There is a rule based on the wavelength of light, called the Rayleigh limit, that sets the smallest separation between objects that can be resolved.  Blue light is in the range of 450nm and green light 550nm.  The Rayleigh limit predicts that the smallest separation between two points that can be detected is 56nm using blue light and 69nm using green light.  This is the theoretical limit at which two points can be separately identified by perfect optics.  It is not the minimum size at which structure can be identified, which is much larger.

So what is the big deal?  Semiconductor devices have features in the low nanometer range, and they can be inspected.  Yes, features smaller than 10 nm can be visualized employing various types of electron microscopy, because the material being “viewed” is a solid surface.  The limitations of optical microscopy have the greatest impact on biological work.  Many of the investigations in this field work with objects that are small, transparent, and have little contrast difference within the object.  This includes viruses and DNA molecules.  Bright field microscopy limits the resolution to approximately 200nm. [Ref. 1]
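
That figure is consistent with the usual diffraction-limit estimate; the numerical aperture below is an assumed value for a good oil-immersion objective, not a number from the reference:

    d \approx \frac{\lambda}{2\,\mathrm{NA}} = \frac{550\ \text{nm}}{2 \times 1.4} \approx 196\ \text{nm}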

The challenges in manufacturing an optical metamaterial are a combination of finding the proper materials to create a negative index of refraction and creating layers of the required thickness to become a metamaterial.  Work published in 2007 [Ref. 2] indicated that, using alternating layers of positive and negative refractive index material, researchers were able to achieve a resolution of 70nm.  Work presented in Reference 3 provides more information on the state of the effort as of 2014.  “By using 15 nm TiO2 nanoparticles as building blocks, the fabricated 3D all-dielectric metamaterial-based solid immersion lens (mSIL) can produce a sharp image with a super-resolution of at least 45 nm under a white-light optical microscope, significantly exceeding the classical diffraction limit and previous near-field imaging techniques.”  Additional work in 2016 [Ref. 4] demonstrated 3D resolution of sub-50nm across the plane and 10nm in depth.  Current research efforts include the application of metamaterials and the inclusion of immersion techniques.

The focus of the metamaterial enhanced lenses is to provide a better understanding of the interaction of biostructures that are beyond the limit of optical microscopy.  The challenges moving forward are numerous.  The application of various layers to create the negative index is dependent on the material being employed and achieving the proper thickness of each layer.  Defects in the layers reduce the resolution of the image.  Fortunately, the production of precise layer thickness can be accomplished by available tools.  Atomic Layer Deposition (ALD) is available with existing semiconductor manufacturing tools.  Even the ability to create structures can be accomplished with existing tools.  The question that remains is how small a dimension will be able to be analyzed optically.  Progress is needed to advance biological/medical research.

References:

  1. https://en.wikipedia.org/wiki/Superlens
  2. https://ui.adsabs.harvard.edu/abs/2007Sci...315.1699S/abstract
  3. https://research.bangor.ac.uk/portal/files/20635555/Bing_Yan_PhD_2018.pdf
  4. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4840372/
Metamaterials

Metamaterials – Acoustic

Metamaterials are usually defined as constructed materials that have characteristics not observed in nature.  As the last blog demonstrated, metamaterials at the nanoscale can be designed to work well in the optical and near-optical portion of the electromagnetic spectrum.  There are also metamaterials that are on the millimeter scale.  This blog will cover the modification of sound waves with metamaterials.

A very generalized statement on metamaterials in the frequency domain is that it is possible to position material in such a manner as to inhibit the flow of energy.  As with the optical examples in the last few blogs, it is possible to create structures that act as if they have negative versions of the normal material properties.  An interesting presentation [Ref. 1] provides a high-level overview of various means of altering material properties in order to affect acoustic (sound) waves.

The acoustic-wave-modifying function of the metamaterial is created by developing geometries that introduce phase delays in portions of the wavefront, causing cancellation of the wavefront.  Figure 1a depicts a sinusoidal wave in red.  Figure 1b shows the addition of a second wavefront in blue that is precisely out of phase with the first wave.  The net result is the straight green line that is the sum of the two waves.  More details, including the mathematics involved, are presented in Reference 2.
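
The cancellation in Figure 1 can be reproduced numerically; the short sketch below is illustrative only and is not code from the referenced paper.

    # Adding a copy of a sine wave delayed by half a wavelength (180 degrees out
    # of phase) sums to zero at every sample point -- the idea behind Figure 1.
    import math

    def sample_wave(phase_shift, n=8, amplitude=1.0):
        """Sample one period of a sine wave with the given phase shift (radians)."""
        return [amplitude * math.sin(2 * math.pi * i / n + phase_shift) for i in range(n)]

    original = sample_wave(0.0)
    delayed = sample_wave(math.pi)          # half-wavelength (180 degree) delay
    combined = [a + b for a, b in zip(original, delayed)]
    print([round(x, 6) for x in combined])  # all (near) zeros: the waves cancel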

So the question is, “What can this type of device actually accomplish?”  Typically, large and heavy materials are employed to mitigate noise.  The typical sound barriers along the sides of expressways in cities are a prime example of that approach.  But for individuals who need shielding while working in a noisy environment, heavy and bulky is not a solution.  Reference 3 presents work on a smaller scale, not large roadside structures, that was performed in Professor Xin Zhang’s lab at Boston University.

So, if one has the mathematics background and the computing power to calculate the desired structure shape, how does one prove it works?  The researchers decided to create a structure that would cancel the sound from a loudspeaker.  Figure 2, shown below, is a picture of the experimental setup.  A video link to its operation is provided in Reference 4.

Figure 2

The bottom portion of Figure 2 shows the sound exiting the plastic pipe; the increase in sound, shown by the larger blue area, is obvious when the acoustic silencer is removed.  Their calculations indicate that they have reduced the noise level by 94%.
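
For perspective, assuming the 94% figure refers to the transmitted acoustic energy (the articles do not spell this out), that corresponds to an attenuation of roughly

    10 \log_{10}(1 - 0.94) = 10 \log_{10}(0.06) \approx -12\ \text{dB}.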

What does that acoustic “plug” look like?  The picture below, Figure 3, shows the plug, a 3D-printed metamaterial, that modifies the acoustic waves in the video of Figure 2 [Ref. 4].

Figure 3

While this structure appears to be simple, the actual internal structure was not depicted in any of the articles on their work.  In order to accomplish the sound cancelling, the structure is complex.  There is a video from Duke University [Ref. 5] that provides an explanation of the modification of acoustic waves.  Figure 4, below, is a picture from the video in Reference 5, with the researchers describing how the different shapes and lengths affect the sound in order to modify it.

Figure 4

Obviously, the structure requires some serious design calculations.  The ability to change the properties of sound waves with metamaterials that are large in size is due to the fact that sound waves have wavelengths measured in inches.

It is possible that these types of devices could replace the heavy sound-blocking partitions along highways with lighter and possibly less costly structures specifically designed to reduce traffic noise while permitting more typical neighborhood sounds to pass.

References:

  1. https://www.esa.int/gsp/ACT/doc/EVENTS/acoustic_workshop/ACT-PRE-0914-valencia-review_acoustic_metamaterial.pdf
  2. https://www.nature.com/articles/s41467-018-03839-z
  3. https://phys.org/news/2019-03-acoustic-metamaterial-cancels.html
  4. https://youtu.be/Fd1D42dVxS0
  5. https://youtu.be/pNsnvRHNfho
Metamaterials