Another year of interesting developments

Energy from Fusion – As reported in Phys.Org [Ref. 1], a breakthrough announcement in fusion came from Lawrence Livermore National Laboratory.  For the first time, a fusion experiment created more energy than was required to generate the fusion.  Unlike nuclear (fission) reactors, which employ radioactive materials to generate power, fusion does not involve radioactive materials.  The process is often described as the way the sun creates its energy.  The idea behind the decades of research is that fusion would be a clean (green) and reliable energy source.

What is the fusion process?  The effort to develop fusion energy employs hydrogen.  (A little chemistry is needed.)  Hydrogen in its most abundant form consists of one proton and one electron.  There are two other forms (isotopes) of hydrogen: deuterium, which consists of one proton, one electron, and one neutron, and tritium, which has two neutrons.  To create energy, one atom of deuterium is fused with one atom of tritium.  The result of this combination is one atom of helium, one neutron, and energy [image from Ref. 2].
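
As a rough check on the energy released per reaction, the mass defect of the deuterium-tritium fusion can be worked out from standard isotope masses; below is a minimal sketch in Python (the mass values are taken from standard tables).

```python
# Rough estimate of the energy released by one D-T fusion event,
# computed from the mass defect (isotope masses in unified atomic mass units).
m_deuterium = 2.014102   # u
m_tritium   = 3.016049   # u
m_helium4   = 4.002602   # u
m_neutron   = 1.008665   # u
U_TO_MEV    = 931.494    # energy equivalent of 1 u, in MeV

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV
print(f"Energy released per D-T fusion: ~{energy_mev:.1f} MeV")  # ~17.6 MeV
```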

The challenges of this process include the fact that temperatures of millions of degrees are required, at pressures that force the hydrogen atoms together.  Normally, hydrogen atoms would repel each other.  Under these conditions, hydrogen becomes a plasma rather than a gas.  The plasma must be constrained, which requires a magnetic bottle.  Making this happen pushes the state of current technologies.  The current experimental construction employs 192 very high-power lasers focused on a very tiny spot.  In that tiny spot is a small pellet containing the two different isotopes of hydrogen.  The entire container is roughly 2,000 microns across.  There has been some information that the successful test used a modified shell thickness.  There is a reasonable probability that the modifications in thickness were in the nanometer range.

This effort was a proof of concept.  It is estimated that, for a fraction of a second, the energy produced was greater than all the power generated in the world during that same time.  This is only the next step in the development of fusion power, but it is a significant step, proving that power can be generated.  Not everyone agrees with pursuing fusion power; Reference 3 presents opposing opinions.

Graphene highlights – Graphene has dropped from view as it becomes more integrated into everyday products.  However, there are developments occurring.  Researchers at the National Graphene Institute employed graphene as an electrode to measure the electrical force applied to water and the resultant rate of separation [Ref. 4].  Quantifying these parameters should permit improvements in the ability to extract hydrogen from water.  Hydrogen as a fuel is being explored by a number of companies.

Researchers at Northeastern University (Boston) and the University of Texas at Arlington have developed a process to measure the topmost atomic layer of materials [Ref. 5].  They have named the process Auger-mediated positron sticking (AMPS).  A key element is that when positrons transition from a vacuum state to a surface-bound state, the state change excites electrons into the vacuum.  More detail is available in the reference.

A nano-based electronics platform has been developed by researchers at Georgia Tech, Tianjin University, and Kwansei Gakuin University [Ref. 6].  The premise is that this graphene-based platform is compatible with conventional silicon semiconductor technology.  The thought is that using this new platform for electronics will produce smaller and more robust circuitry.

Scientific Integrity – Unfortunately, this is a topic that has been covered multiple times over the last few years.  Sometimes research results have erroneous conclusions, incomplete data, or data “fitted” in an attempt to prove a hypothesis.  The most recent report to raise controversy, published December 15, 2022 [Ref. 7], is from the Agency for Healthcare Research and Quality, which is part of the US Department of Health and Human Services.  Their work estimates there are 130 million emergency department visits per year within the United States.  Using an average of 25,000 visits per department results in approximately 5,200 emergency departments in the United States.  Their report indicates there are 50 deaths per facility per year, or 260,000 deaths per year in emergency departments due to diagnostic errors!  These conclusions were republished in late December 2022 by the DC Medical Malpractice & Patient Safety Blog [Ref. 8], on Twitter by the New York Times [Ref. 9], and in the UK Daily Mail [Ref. 10].  This information is spreading worldwide.
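
For clarity, the extrapolation behind the headline number is simple arithmetic built entirely on the report’s own estimates; a minimal sketch:

```python
# Reproducing the report's extrapolation; all figures are the report's estimates.
annual_ed_visits = 130_000_000    # estimated US emergency department visits per year
visits_per_department = 25_000    # assumed average visits per department per year
deaths_per_department = 50        # estimated diagnostic-error deaths per facility per year

departments = annual_ed_visits / visits_per_department     # ~5,200 departments
total_deaths = departments * deaths_per_department          # ~260,000 deaths per year
print(f"{departments:,.0f} departments -> {total_deaths:,.0f} estimated deaths per year")
```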

There is a problem with this report that relates to scientific integrity.  An article published in the Wall Street Journal on December 30, 2022 [Ref. 11] raises questions about the accuracy of the report’s conclusions.  That article points out the implication that one out of every 500 patients dies of physician error, and it questions the statistical method employed.  The total number of patients in the key study was 503, with one patient dying due to a delayed diagnosis by an ER physician.  The article also points out that the study was focused on all kinds of medical errors and was not designed specifically to estimate death rates from erroneous or late diagnoses.  The sample size was insufficient to allow that type of conclusion.

The result is that a report with shocking results, a quarter of a million emergency room deaths due to physician error, raises questions about the medical profession.  When errors in the process demonstrate that the conclusions are inaccurate, it raises questions about the researchers and their organizations.  As a side issue, the underlying study was conducted in Canada using Canadian emergency room data.  How does this demonstrate what happens in the United States?  There have not been any clarifications issued as of the date of this blog.  When issued, any clarification cannot undo the damage of the data already published in various media.

References:

  1. https://phys.org/news/2022-12-nuclear-fusion-scientists-latest-breakthrough.html  
  2. https://www.energy.gov/science/doe-explainsnuclear-fusion-reactions
  3. https://www.counterpunch.org/2022/12/28/nuclear-fusion-dont-believe-the-hype/  
  4. https://www.graphene-info.com/researchers-use-graphene-electrodes-split-water-molecules
  5. https://statnano.com/world-news/96864/Researchers-use-graphene-to-measure-the-properties-of-a-material%E2%80%99s-surface-layer
  6. https://www.graphene-info.com/researchers-use-graphene-electrodes-split-water-molecules 
  7. Diagnostic Errors in the Emergency Department: A Systematic Review  https://effectivehealthcare.ahrq.gov/products/diagnostic-errors-emergency/research
  8. https://www.jdsupra.com/legalnews/misdiagnoses-lead-to-250-000-er-8669286/ 
  9. https://twitter.com/nytimes/status/1604012290734522368
  10. https://www.dailymail.co.uk/health/article-11546585/ER-misdiagnoses-kill-quarter-million-Americans-year.html
  11. https://www.wsj.com/articles/false-alarm-about-emergency-rooms-ahrq-physicians-er-misdiagnoses-mortality-rate-us-canada-trust-11672136943
Science

Chiplets – What are semiconductor chiplets and why are they needed?

As there are more references to the development of semiconductor chiplets, it might be useful to consider what they are and why they are needed. As the size of semiconductor features shrinks, more and more transistors can be packed into the same area. This increase in the density of features permits continual progress following the predictions of Moore’s Law. There are consequences to this increase in density. One is that as more and more capabilities are added, there is a need to provide additional means of making the device’s functions available to the external world. This necessitates an increase in the number of input/output (i/o) connections for the device. There is a limit on how dense the spacing of these i/o points can be, due to the need to connect each and every point to additional circuitry without misconnecting the device.


The issue of handling the interconnects has seen a number of proposed solutions. Incorporating newer (custom-designed?) materials and developing customized techniques, including 2.5D, 3D-IC, wafer-level packaging, and system-in-package [cf. Ref. 1 for additional details], are among the possibilities. These are among the new ideas for developing interconnects and other means of getting signals and information out of the device.


The larger influencing factor is the cost to implement and manufacture. According to Reference 2: “There are fewer customers at 5nm than there were at 7nm, and there were fewer at 7nm than at 10nm, because a smaller number of companies can extract value from the large capital investments needed to develop these new products.” The issue is funding. Any design needs to be able to provide a return on the development and production costs. The author has heard of design costs for a leading-edge device as high as $100M or more! That figure also reflects the hours needed to develop all aspects of the design, including the tooling needed for manufacturing. What is an acceptable defect rate for 100 million transistors on a single device becomes a disaster when there are tens of billions of transistors on the device.
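
To see why the defect tolerance collapses as transistor counts grow, consider a simple Poisson yield model; this is a minimal sketch, and the per-transistor defect probability is an illustrative assumption, not industry data.

```python
import math

# Simple Poisson yield model: probability that a die with n transistors has
# zero defects, given an assumed per-transistor defect probability.
DEFECT_PROB_PER_TRANSISTOR = 1e-10   # illustrative assumption, not industry data

def die_yield(num_transistors: int) -> float:
    expected_defects = num_transistors * DEFECT_PROB_PER_TRANSISTOR
    return math.exp(-expected_defects)

for n in (100_000_000, 10_000_000_000, 50_000_000_000):
    print(f"{n:>14,} transistors -> expected yield ~{die_yield(n):.1%}")
```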


A solution is needed. Enter the concept of the “chiplet”. As Reference 3 explains, a chiplet is a sub-device item that provides certain predetermined functions. One example is that a chiplet could be a fully operational, specialized timing circuit. If this concept moves forward, and it appears to be doing so, there will be libraries of function designs that can be selected from to perform specific actions/calculations. These chiplets can be packaged and mounted, mounted directly to the wafer (similar to flip-chip assembly at the printed wiring board level), or attached by wafer-segment-to-wafer bonding. The effort to create these new capabilities will not be easy. Several major manufacturers have created a consortium [Ref. 4] to standardize the specifications and capabilities of chiplets.


The question is why these are needed. Reference 5 describes the need for faster computing power. The latest exascale supercomputer CPU and GPU designs mix and match complex chip functions in advanced packages. These computers will be 1,000 times faster than existing supercomputers. “That’s beginning to change. Some, but not all, exascale supercomputers are using a chiplet approach, particularly the U.S.-based systems. Instead of an SoC, the CPUs and GPUs in these systems incorporate smaller dies or tiles, which are then fabricated and reaggregated into advanced packages. Simply put, it’s relatively easier to fabricate smaller dies with higher yields than large SoCs.”


On the smaller scale, chiplets can provide time-saving designs that can produce devices in very small packages. The medical community benefits from smaller devices, especially implanted devices. Typically, smaller devices require less power, which in turn provides longer battery life. In other cases, the ability to reduce the size of the device enables applications that are not currently possible. Will everything go to chiplets? Probably not. Advanced-capability devices can benefit from more efficient packaging, which chiplets appear to provide. The future will inform us of how effective chiplets can be.
References:

  1. https://semiengineering.com/knowledge_centers/packaging/advanced-packaging/chiplets/
  2. https://semiengineering.com/scaling-advanced-packaging-or-both/
  3. https://semiengineering.com/paving-the-way-to-chiplets/ https://gildersdailyprophecy.com/posts/wafer-scale-integration-is-underway
  4. https://www.designnews.com/electronics/tech-giants-form-consortium-standardize-chiplet-interfaces
  5. https://semiengineering.com/chiplets-enter-the-supercomputer-race/?cmid=27ae90f7-287c-484e-b4b9-4299ffd5c533
Semiconductor Technology

Where Wafer Scale Integration Came From

This is a story that starts in the early days of electronics.  As new concepts were developed for applications employing electricity, there was a need to develop a means of assembling components into completed electric circuits.  The “breadboard” was developed.  It was a non-conducting material with holes punched through it in regular rows and columns.  By inserting components through the holes and soldering wires, circuit connections could be created.  Vacuum tubes provided the means of controlling the flow of electricity through the circuit.  Obviously, this process was not viable for consumer products, which needed to be manufactured in volume.  Vacuum tube electronics date to the early 1900s and were used in sound recording and reproduction.

Printed wiring boards (PWBs), also known as printed circuit boards (PCBs), were developed.  Copper patterns were created on the insulating substrate (board), and holes were drilled where components were to be inserted.  After the components were inserted, the PWB with components (resistors, capacitors, inductors, vacuum tube mounts, connectors, etc.) was passed through a soldering machine.  This machine had molten solder (primarily a tin-lead composition) in a large tank/bath.  A standing wave was created, and the PWB passed over the standing wave, just touching the component-board surface.  This created an assembly with the components firmly attached to the board.  Connectors permitted tying additional PWBs together to create the desired electrical system.

As systems became more complex, the quantity of PWBs needed to create the desired system became very large, and the number of vacuum tubes per PWB increased.  The weak link in these systems was the vacuum tubes.  Their life span was quite variable, and when a large number of vacuum tubes were involved, the system reliability was poor.  Companies that required high reliability and long functioning time needed to find a better solution.

Among those companies in need was AT&T.  Their Bell Labs was given the task of developing a reliable substitute.  The vacuum tube switching circuits constantly needed to have vacuum tubes replaced.  In addition, vacuum tubes take time to “warm up” to function properly.  While the tubes are functioning, they are generating heat, and heat is a source of their failure.  In December 1947 (an interesting year for other reasons), Bell Labs researchers demonstrated a signal output increase when two gold contacts were applied to a germanium substrate [Ref. 1].  This was the first demonstration of the transistor effect.  The development of the transistor grew rapidly.  In the late 1950s, Jack Kilby (Texas Instruments) developed a memory cell, which was a combination of various transistors on a single substrate.  Shortly after this development, Robert Noyce created a planar circuit that had the interconnections (wires) integrated into the surface of the substrate.  It was an integrated circuit (IC).

Fast forward to the early 1970s, when Intel developed a 4-bit microprocessor, which was rapidly followed by 8-bit and then 16-bit devices. [Ref. 3]  The advantage of the microprocessor was that it provided a means of changing the function of the circuitry without having to physically change the actual circuit.  The need for additional functions has led to continual growth in the complexity of circuits, with greater and greater numbers of features on the IC.  In order to accomplish this, smaller and smaller features were continually developed.  (cf. Moore’s Law, Ref. 4, for more details.)

The current, more capable devices with billions of transistors face a limit due to the number of output connections for the devices.  These ICs, like all the previous ones, are mounted on PWBs for interconnection.  Time is required for an electrical signal to travel from one IC to the PWB interconnection, traverse the PWB circuit lines, and then enter the desired IC.  While these times are tiny fractions of a second, they delay the processing.

A possible solution is to create the desired circuit on a single silicon wafer instead of using multiple types of ICs attached to a PWB.  This solution is called wafer scale integration.  Researchers from UCLA have proposed “packing dozens of servers’ worth of computing capability onto a dinner-plate-size wafer of silicon.” [Ref. 5]  There are a number of challenges, and there are some ideas on how to meet them.

Next month’s blog will discuss “Chiplets”.

References:

  1. https://en.wikipedia.org/wiki/Transistor
  2. https://gildersdailyprophecy.com/posts/wafer-scale-integration-is-underway
  3. https://en.wikipedia.org/wiki/Transistor
  4. https://en.wikipedia.org/wiki/Moore%27s_law
  5. https://spectrum.ieee.org/goodbye-motherboard-hello-siliconinterconnect-fabric#toggle-gdpr
Semiconductor Technology

Advancements for Nano and Below

Previous blogs have mentioned the need for improved tools, with the goal of at least one order of magnitude greater capability than the objects being evaluated.  That capability is beginning to become available.  The May/June issue of Photonics Focus (page 15) has a summary of a paper [Ref. 1] that addresses enhanced resolution with x-rays.  As mentioned in a few blogs, including last month’s, the resolution limit of light is determined by its wavelength.  The shorter the wavelength, the smaller the feature that can be resolved.  That is one of the reasons the semiconductor industry invested so much time and money in getting EUV lithography to work.  With a wavelength of 13.5nm, the limit for just resolving images is only a few nanometers.  The article refers to an “achromat”, which is an optical device that separates a light beam and then recombines the images to produce the result: sharp optical images in photography and microscopy.  It is still size-limited by Snell’s Law.  The article describes an arrangement similar to the achromat, but one that employs x-rays as the source.  The proof-of-concept microscopy employed a synchrotron, which is not practical for most cases.  However, the initial efforts in EUV lithography were built on x-ray lithography, which also employed a synchrotron.  This is something to keep an eye on for developments.
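
As a rough illustration of how wavelength sets the printable feature size, the standard lithography rule of thumb CD = k1 × λ / NA can be evaluated for EUV; the k1 and NA values in this sketch are illustrative assumptions, not the parameters of any specific exposure tool.

```python
# Rule-of-thumb lithography resolution: CD = k1 * wavelength / NA.
# The k1 and NA values are illustrative assumptions, not tool specifications.
def critical_dimension(wavelength_nm: float, k1: float, na: float) -> float:
    return k1 * wavelength_nm / na

EUV_WAVELENGTH_NM = 13.5
for k1, na in [(0.4, 0.33), (0.3, 0.55), (0.25, 0.55)]:
    cd = critical_dimension(EUV_WAVELENGTH_NM, k1, na)
    print(f"k1={k1}, NA={na}: smallest feature ~{cd:.1f} nm")
# At the theoretical k1 limit of 0.25 with a high-NA system, the result is
# down to a few nanometers, consistent with the statement above.
```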

The second reference describes a method of improving the stability and imaging time of Tip-Enhanced Raman Spectroscopy (TERS).  One of the challenges that drove this development is the need to resolve minute details across large samples.  (Large in this case means micron-sized surfaces.)  Why is this important?  As the size of the materials being employed in specific applications shrinks, the need to guarantee that the surface is exactly as specified becomes critical.  From experience, I know of a superior sensor that was developed about 15 years ago.  It employed sheets of graphene.  It never became a product because the graphene had random defects, and there was no instrumentation available to detect the defects.  The need for tip enhancement is due to the fact that, in the typical Raman spectroscopy process for deformable surfaces, the stability time is on the order of milliseconds.  The authors have developed a methodology that permits longer imaging times with increased scanning areas and better resolution.  Obviously, much more work is required, but a direction has been demonstrated for additional efforts.

The following is about work performed at Duke University [Ref. 3].  Their work has been referenced with respect to metamaterials in blogs on optics.  To fully appreciate what they have done, it is necessary to explain some background.  In order to control light, it is necessary to create structures that enable conditions that can be considered negative indices of refraction.  When light impinges on a metasurface, it frees electrons in the metal, creating an oscillation.  With the appropriate structure, the light is effectively absorbed.  Their efforts have trapped light beneath the surface.  These metasurfaces consist of a base metal layer with a nanometer-scale layer of transparent material in specific shapes.  The top of this three-layer structure is a layer of silver nanocubes.  The entire structure is only several nanometers thick.  Colloidal chemistry enables the synthesis of shaped nanomaterials across large areas, even wafer-sized ones.  Their latest efforts inverted the layers and created nanosized indents in the surface.  This process permits the construction of indents of different sizes and shapes, which increases the range of wavelengths that can be modified with one structure.

Tools and processes are being developed to work at scales that were impossible to even observe only one or two decades ago.  There is interesting work [Ref. 4] on using the chirality of materials to enable an entirely new field for controlling the properties of electromagnetic waves.  That is left to the reader to explore if interested.

References:

  1. A. Kubec, et al., Nat. Commun., 2022, doi: 10.1038/s41467-022-28902-8
  2. https://www.laserfocusworld.com/science-research/article/14281057/improving-the-stability-and-imaging-time-of-ters
  3. https://www.laserfocusworld.com/science-research/article/14280354/plasmonic-metasurface-fab-process-flip-expands-its-wavelength-range
  4. Xu et al., Adv. Photon., 2022, doi: 10.1117/1.AP.4.4.046004
Nanotechnology

Metamaterials – Optics

In June’s blog, invisibility cloaking was covered.  While that is one type of optical metamaterial, the broader advantage of optical metamaterials is that they can extend the range of traditional optics.  (While this work has been ongoing for years, it does not have the public impact of the invisibility cloak.)  There is a rule based on the wavelength of light, called the Rayleigh limit, that defines the smallest objects that can be resolved.  Blue light is in the range of 450nm, and green light is around 550nm.  The Rayleigh limit predicts that the smallest separation between two points that can be detected is 56nm using blue light and 69nm using green light.  This is the theoretical limit at which two points can be separately identified by perfect optics.  It is not the minimum size at which structure can be identified, which is much larger.

So what is the big deal?  Semiconductor devices have features in the low nanometer range, and they can be inspected.  Yes, features smaller than 10nm can be visualized employing various types of electron microscopy, because the material being “viewed” is a solid surface.  The limitations of optical microscopy have the greatest impact on biological work.  Many of the investigations in this field work with objects that are small, transparent, and have little contrast difference within the object.  This includes viruses and DNA molecules.  Bright-field microscopy limits the resolution to approximately 200nm. [Ref. 1]
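
As a rough check on that ~200nm figure, the standard Rayleigh criterion d = 0.61 × λ / NA can be evaluated; this is a minimal sketch assuming a typical high-NA (1.4) oil-immersion objective.

```python
# Rayleigh criterion for a conventional optical microscope: d = 0.61 * wavelength / NA.
# The numerical aperture is an assumption (a typical oil-immersion objective).
def rayleigh_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    return 0.61 * wavelength_nm / numerical_aperture

NA = 1.4
for color, wavelength in [("blue", 450), ("green", 550)]:
    d = rayleigh_limit_nm(wavelength, NA)
    print(f"{color} light ({wavelength} nm): resolution limit ~{d:.0f} nm")
# On the order of 200 nm for visible light, in line with the bright-field limit quoted above.
```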

The challenges in manufacturing an optical metamaterial are a combination of finding the proper materials to create a negative index of refraction and creating layers of the required thickness to become a metamaterial.  Work published in 2007 [Ref. 2] indicated that, using layers of positive- and negative-refractive-index material, researchers were able to achieve a resolution of 70nm.  Work presented in Reference 3 provides more information on the state of the effort in 2014: “By using 15 nm TiO2 nanoparticles as building blocks, the fabricated 3D all-dielectric metamaterial-based solid immersion lens (mSIL) can produce a sharp image with a super-resolution of at least 45 nm under a white-light optical microscope, significantly exceeding the classical diffraction limit and previous near-field imaging techniques.”  Additional work in 2016 [Ref. 4] demonstrated 3D resolution of sub-50nm in the plane and 10nm in depth.  Current research efforts include the application of metamaterials and the inclusion of immersion techniques.

The focus of metamaterial-enhanced lenses is to provide a better understanding of the interactions of biostructures that are beyond the limit of optical microscopy.  The challenges moving forward are numerous.  The application of various layers to create the negative index depends on the material being employed and on achieving the proper thickness of each layer.  Defects in the layers reduce the resolution of the image.  Fortunately, the production of precise layer thicknesses can be accomplished with available tools.  Atomic Layer Deposition (ALD) is available with existing semiconductor manufacturing tools.  Even the ability to create the structures can be accomplished with existing tools.  The question that remains is how small a dimension will be able to be analyzed optically.  Progress is needed to advance biological/medical research.

References:

  1. https://en.wikipedia.org/wiki/Superlens
  2. https://ui.adsabs.harvard.edu/abs/2007Sci…315.1699S/abstract
  3. https://research.bangor.ac.uk/portal/files/20635555/Bing_Yan_PhD_2018.pdf
  4. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4840372/
Metamaterials

Metamaterials – Acoustic

Metamaterials are usually defined as constructed materials that have characteristics not observed in nature.  As the last blog demonstrated, metamaterials at the nanoscale can be designed to work well in the optical and near-optical portion of the electromagnetic spectrum.  There are also metamaterials that are on the millimeter scale.  This blog will cover the modification of sound waves with metamaterials.

A very generalized statement on metamaterials in the frequency domain is that it is possible to position material in such a manner as to inhibit the flow of energy.  As with the optical examples in the last few blogs, it is possible to create structures that act as if they have negative versions of the normal material properties.  An interesting presentation [Ref. 1] provides a high-level overview of various means of altering material properties in order to impact acoustic (sound) waves.

The metamaterial’s ability to modify acoustic waves comes from geometries that create phase delays in portions of the wavefront, causing cancellation of the wavefront.  Figure 1a depicts a sinusoidal wave in red.  Figure 1b shows the addition of a second wavefront in blue that is precisely out of phase with the first wave.  The net result is the straight green line that is the sum of the two waves.  More details, including the mathematics involved, are presented in Reference 2.
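
The cancellation shown in Figure 1 can be reproduced numerically; below is a minimal sketch summing a wave with a copy shifted by half a wavelength (the frequency chosen is arbitrary and purely illustrative).

```python
import numpy as np

# Destructive interference: a wave plus a copy shifted by 180 degrees
# (half a wavelength) sums to zero everywhere.
t = np.linspace(0.0, 1.0, 1000)
frequency_hz = 5.0                                        # arbitrary illustrative frequency
wave = np.sin(2 * np.pi * frequency_hz * t)               # original wave (red, Figure 1a)
anti_wave = np.sin(2 * np.pi * frequency_hz * t + np.pi)  # out-of-phase wave (blue, Figure 1b)

total = wave + anti_wave                                  # net result (green line)
print(f"Maximum residual amplitude: {np.max(np.abs(total)):.2e}")  # effectively zero
```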

So the question is, “What can this type of device actually accomplish?”  Typically, large and heavy materials are employed to mitigate noise.  The typical sound barriers along the sides of expressways in cities are a prime example of that approach.  But for individuals who need shielding while working in a noisy environment, heavy and bulky is not a solution.  Reference 3 presents work on a smaller scale (not large roadside structures) that was performed in Professor Xin Zhang’s lab at Boston University.

So, if one has the mathematics background and the computing power to calculate the desired structure shape, how does one prove it works?  The researchers decided to create a structure that would cancel the sound from a loudspeaker.  Figure 2, shown below, is a picture of the experimental setup.  The video link to its operation is Reference 4.

Figure 2

The bottom portion of Figure 2 shows the sound exiting the plastic pipe; the increase in sound, shown by the larger blue area, is obvious when the acoustic silencer is removed.  Their calculations indicate that they have reduced the noise level by 94%.

What does that acoustic “plug” look like?  The picture below, Figure 3, shows the plug, a 3D-printed metamaterial, that modifies the acoustic waves in the video from Figure 2 [Ref. 4].

Figure 3

While this structure appears to be simple, the actual internal structure was not depicted in any of the articles on their work.  In order to accomplish the sound cancelling, the structure is complex.  There is a video from Duke University [Ref. 5] that provides an explanation of the modification of acoustic waves.  Figure 4 below is a picture from the video in Reference 5, with the researchers describing how the different shapes and lengths impact the sound in order to modify it.

Figure 4

Obviously, the structure requires some serious design calculations.  The ability to change the properties of sound waves with metamaterials that are large in size is due to the fact that sound waves have wavelengths measured in inches.

It is possible that these types of devices could replace the heavy sound-blocking partitions along highways with lighter and possibly less costly structures specifically designed to reduce traffic noise while permitting more typical neighborhood sounds to pass.

References:

  1. https://www.esa.int/gsp/ACT/doc/EVENTS/acoustic_workshop/ACT-PRE-0914-valencia-review_acoustic_metamaterial.pdf
  2. https://www.nature.com/articles/s41467-018-03839-z
  3. https://phys.org/news/2019-03-acoustic-metamaterial-cancels.html
  4. https://youtu.be/Fd1D42dVxS0
  5. https://youtu.be/pNsnvRHNfho
Metamaterials

Metamaterials – Cloak of Invisibility details

Metamaterials are usually defined as constructed materials that have characteristics not observed in nature.  While the term typically refers to nanoscale materials, there are metamaterials that are on the millimeter scale.  (That will be covered in a future blog.)

Currently, in the public domain, there are two types of invisibility cloaks.  The one that exists in movies and in a lot of videos is fake.  While the process is straightforward, the effort takes some time.  The video in Reference 1 depicts how to make a video of an “invisibility” cloak.  The developer shows the background of the room he will be filming in.  He then walks into the view of the camera, picks up the “invisibility” cloak, and moves it in front of him, which shows portions of the room behind him wherever the cloak is covering him (Figure 1).  The room is shown exactly as it was in the beginning of the video.

Figure 1

About 1:20 into the video, the developer shows how this illusion is created.  He has a video (even a still picture would work, as long as it has the same camera position) of the room.  When he walks into the viewing area, he picks up a green cloth and goes through whatever disappearing motions he will be showing.  The rest of the video shows how he makes the area covered by the green cloth transparent in that footage.  Once that is completed, the modified video is overlaid on the background video/picture.  Wherever the green cloth had been in the footage, the original background appears.  The more complex the background, the more real the “invisibility” cloak appears.  Compared with public domain examples of actual modification of the light path, the clarity of the background indicates that things have been manipulated.

There is a significant amount of information about the state of actual invisibility cloaks; most of it focuses on a Canadian company, HyperStealth Biotechnology Corp. [Ref. 2].  Their product is called Quantum Stealth.  The picture below is a screen capture from a video of the product [Ref. 3], which depicts a person emerging from behind the “cloak” (Figure 2).

Figure 2

This is the latest version in a long series of inventions to improve the capabilities of the material.  So how does it work?  While the actual fabrication details of the material are not available, certain fundamental principles can be surmised.  In order to change the way light appears on an object, it is necessary to create something that will bend/transform light in a manner that does not occur in nature – a metamaterial.

Reference 4 has a good explanation of how light bends going through a lenticular lens (Figure 3, time 2:10 in the video).  Further on in the video (Figure 4, time 3:45), the author demonstrates an “invisibility shield” created from two opposing lenticular lenses with water filling the space between them.

Figure 3

Figure 4

What is noticeable is that the vertical window line has disappeared at the shield.  The property of the device is such that it takes the horizontal structure and, in effect, creates continuous horizontal structures across the “shield”.

There is another issue governed by Snell’s Law: different wavelengths (colors) are bent (refracted) by different amounts.  Work has been done to include both visible light and light outside the visible portion of the spectrum in the reimaging of the background.  Obviously, there have been significant improvements in the Quantum Stealth material’s latest version.

For those who would like a more detailed, scientific explanation, Reference 5 provides a good description with the required mathematical details.  Quoting from the beginning of its metamaterials explanation, which notes that this approach was first applied to the cloak:

“The immense potential of refractive index engineering in metamaterials can be further exemplified by recent progress in the field of transformation optics that enabled novel opportunities in the design of graded-index structures … The basic idea of the transformation method is that, to guide waves along a certain trajectory, either the space should be deformed, assuming that material properties remain the same, or the material properties should be properly modified.”

Remember: Metamaterials are a novel class of functional materials that are designed around unique micro- and nanoscale patterns of structures, which cause them to interact with light and other forms of energy in ways not found in nature.  The properties of these materials are directly due to the combination of various material characteristics involving both their shape and thickness.

References:

  1. https://youtu.be/4z1yGdEPo0Q
  2. https://www.hyperstealth.com
  3. https://youtu.be/pZMyWEWHCTM
  4. https://youtu.be/TJvGOI263po 
  5. https://www.sciencedirect.com/topics/materials-science/metamaterials
Metamaterials

Metamaterials – Cloak of Invisibility

An excellent example of how metamaterials work comes from Stanford University [Ref. 1].  The concept can be described as creating nanoscale antennas to affect the behavior of electromagnetic waves.  Over a half century ago, TVs had external metal antennas.  The design was for the frequencies of the various television stations.  If your reception was inadequate, changing the position of the antenna provided an opportunity to improve the signal being received.  Working with visible and infrared light, the wavelengths are roughly 1,000nm or less.  So, modifying the light beam requires structures that are much smaller than the wavelength of the light.  Fortunately, there are semiconductor processes that are able to manufacture devices at a fraction of that wavelength.

One example that is employed to explain metamaterials involves light.  When light rays enter a material, the light is bent by an amount that is related to the index of refraction, which reflects how much the speed of light is reduced in that material.  Metamaterials can actually have a negative index of refraction.  That means the light will bend in the opposite direction from what is observed in nature.  AND the apparent speed of light in the material will appear to be greater than the speed of light in a vacuum!
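
A minimal sketch of Snell’s law (n1 sin θ1 = n2 sin θ2) shows the effect; the index values are purely illustrative, with the negative one standing in for a hypothetical negative-index medium.

```python
import math

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
# A negative index flips the sign of the refraction angle, so the ray
# bends to the opposite side of the surface normal.
def refraction_angle_deg(n1: float, n2: float, incidence_deg: float) -> float:
    sin_theta2 = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(sin_theta2))

incidence = 30.0                      # degrees from the surface normal
for n2 in (1.33, -1.33):              # ordinary water vs. a hypothetical negative-index medium
    angle = refraction_angle_deg(1.0, n2, incidence)
    print(f"n2 = {n2:+.2f}: refracted ray at {angle:+.1f} degrees from the normal")
```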

The picture below is a simple demonstration that the index of refraction of water, which is different from that of air, creates the appearance of a displacement of the image of the pencil.  If one could create a liquid with a negative index, the pencil would appear displaced in the opposite direction.

While the ability to create negative indices of refraction in liquids has not been reported, work is being done on solid materials that create the impact of a negative index.  The key to this “cloaking” device is a lenticular lens, which will be described in next month’s blog. 

What the lenticular lens does is “bend” the light outward instead of either focusing or reflecting it.  While this appears promising, there are still issues.  As with any existing lens, the colors (frequencies) of light do not all bend by the same amount.  This is addressed in optical lenses by using different shapes and materials to compensate for the different amounts of bending.  One example of different colors (frequencies) of light being bent differently is a rainbow.  The color separation is due to the different amounts of bending that water droplets impose on sunlight.

There has been some work done on using pairs of either cell phones or iPads to create “holes” in objects/people since 2006 [Ref. 2].  The real issue is that while small “see-through” effects can be created that appear to let one see through an object, covering an entire, large object is not practical with this technique.

Science moves forward finding novel means to apply new technology as it is developed.  The application of lenticular lenses permits some very interesting situations.  The picture below shows only the head and shoulders of a person [Ref. 3]. 

If one observes the picture closely, there is a slight change in the coloring and texture, which is due to the “cloaking” device.  There have been considerable efforts in creating “invisibility” or “stealth” material that works through a reasonable portion of both the visible and infrared spectrum.  Remember the rainbow example above.  If that were to happen to the light being “bent” around the object, it would be easy to detect the attempt at cloaking.

A material advertised as “Quantum Stealth” has demonstrated the ability to “hide” large objects through light-bending technology [Ref. 4].  There are videos of the performance of the material on their site.  While there have been examples of cloaking large objects since at least 2011 [Ref. 5], the ability to cover a wider spectrum of light has improved steadily.  Next month: how this metamaterial works and how it can be mass produced.

References:

  1. https://engineering.stanford.edu/magazine/article/what-are-metamaterials-and-why-do-we-need-them 2016
  2. https://www.flickr.com/photos/evanbooth/291214973
  3. https://www.freethink.com/technology/invisibility-cloak 
  4. https://www.hyperstealth.net/
  5. https://www.wired.com/2011/09/invisibility-cloak-tanks-cows/
Metamaterials

What comes after nanotechnology?

In many cases, the semiconductor industry leads the application of technology shrinkage.  Each “generation” of new devices employs smaller and smaller dimensional structures in the circuitry.  There is work being done on qualifying the processes for the 3nm and 2nm generations.  Since future shrinkage will be fractions of a nanometer, Intel Corporation has indicated it is switching terminology from 2nm to 20 Angstroms.  (The term Angstrom has been employed in optics for well over a century.)  The next metric prefix identified is “pico” for 10^-12 meters.  That scale is well into the atomic structure.

Research has demonstrated that materials have different properties at the nanoscale than in bulk, the form commonly associated with the materials.  One example is that silver has antibacterial properties at sizes in the low double-digit nanometers.  Another example is that aluminum becomes highly reactive at the low double-digit nanoscale.

There are two phrases “floating” around in various publications: metamaterials and mesoscale materials.  The former refers to human-made constructs of materials that are not normally found in nature and is the term more prominently employed in publications.  Mesoscale, as defined by Los Alamos National Laboratory [Ref. 1], “is the spatial scale beyond atomic, molecular, and nanoscale where a material’s structure strongly influences its macroscopic behaviors and properties.”  Stated differently, mesoscale can range from the behaviour of a particular section of a large weather event, i.e., how a tornado interacts with the entire storm system, to the behaviour of a submicron section of material within bulk material.  It is the properties of a small section of a larger system that may have significantly different characteristics.

Metamaterials are usually identified as materials that have characteristics not seen in nature.  “Metamaterials are a novel class of functional materials that are designed around unique micro- and nanoscale patterns of structures, which cause them to interact with light and other forms of energy in ways not found in nature.” [Ref. 2]  The properties of these materials are directly due to the combination of various material characteristics involving both their shape and thickness.

The ability to produce two-dimensional materials has provided the means of investigating novel material properties.  The vast majority of the work in metamaterials is focused on modifying or influencing the behaviour of electromagnetic waves.  These waves include RF (the radio spectrum), microwaves, and even infrared and visible light.  Some details on these applications and what they can accomplish will be the subject of future blogs.

The point of this blog is to focus on the application of nanomaterials to create structures that do not exist in nature.  As mentioned in previous blogs, during the Middle Ages artisans knew that adding gold nanoparticles of certain sizes during the manufacture of glass would create the red color in stained glass windows.  They could not measure the particles, but they probably had a process or recipe that they followed to create the desired particle size.  There are industries where ball milling is employed to provide a means of obtaining a sufficiently uniform particle size to enable the manufacture of the final product.

Next month, the blog will start covering details of the structure of metamaterials.  One example that is employed to explain metamaterials involves light.  When light rays enter a material, the light is bent by an amount that is related to the index of refraction, which reflects how much the speed of light is reduced in that material.  Metamaterials can actually have a negative index of refraction.  That means that the light will bend in the opposite direction from what is observed in nature.

There are many applications that are being considered.  This could be a very important field to develop novel products that can produce effects that are currently impossible.  More next time.

References:

  1. https://lanl.gov/science-innovation/pillars/materials-science/mesoscale/index.php
  2.  https://www.nanowerk.com/what-are-metamaterials.php
Metamaterials

Is there an end to transistor area density shrinkage?

Moore’s Law is quoted in many different forms.  Basically, area density is a primary focus.  If the difference between generations is a 30% reduction in dimensions, then the area of a transistor (70% by 70%) is reduced by almost 50% (length times width).  With the first 3nm-generation devices arriving and the 2nm (or 20 Angstrom) generation in the planning stages, is there a limit on what can be done on a single planar surface?  The different types of transistor formations have been covered in previous blogs.  The question is what comes next.  The most promising answer has been two-dimensional materials.
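
The area arithmetic can be spelled out in a few lines; below is a minimal sketch of how a 30% linear shrink compounds across generations.

```python
# Area scaling when each linear dimension shrinks by 30% per generation:
# the new transistor area is 0.7 * 0.7 = 0.49 of the old one (~50% reduction).
SHRINK_FACTOR = 0.70

area = 1.0
for generation in range(1, 4):
    area *= SHRINK_FACTOR ** 2          # length times width
    print(f"After generation {generation}: area is {area:.0%} of the original")
```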

Gate length is the distance the electrons travel as they flow through the transistor.  The “gate” is the mechanism that permits or inhibits the flow of electrons from the source (of the electrons) to the drain.  The “gate” switches on and off in response to a controlling voltage.  Below a certain dimension, about 5nm, silicon cannot effectively control the flow of electrons.

Aware of this limit, researchers have been working with two-dimensional materials.  Molybdenum disulfide has been employed in creating working two-dimensional transistors.  (cf. the February 2022 blog.)  This material consists of three single-atom sheets: sulfur, molybdenum, and sulfur.  Work has been done to produce transistors with gate lengths of 1nm using carbon nanotubes and molybdenum disulfide.  Chinese researchers have taken this one small step further by creating a vertical structure [Ref. 1] with a gate length of 0.34nm (3.4 Angstroms).  The structure is similar to a stair step.  The surface of the stair is a single atomic layer of molybdenum disulfide on top of a hafnium dioxide insulator.  More details are in Reference 1.  The transistor effect occurs on the vertical step, which is the single layer of atoms.

There is additional work being conducted to evaluate the impact of current switching on nanoscale structures.  Work has shown that there are minor changes in the gate lengths on switching.  By increasing the gate width to almost 5nm, the devices can improve the leakage situation at very small dimensions.

Does this work imply that the end is in sight for the continual shrinkage of circuitry?  The answer is “NO!”  Work is being done on three-dimensional circuitry, where additional circuitry is stacked on top of an existing layer.  3-D structures have been explored and show great promise.  Improving density by adding a second level of circuitry is equivalent to a dimensional reduction to 70% of the previous dimensions.  The issue with this approach is the potential for manufacturing losses due to misalignment and the additional layers of semiconductor processing.

What is starting to emerge is the creation of “chiplets”, which are small segments of circuitry that can perform one or more functions.  By assembling and interconnecting these chiplets with other circuitry, it is possible to create unique circuits, which in effect are 3-D semiconductors.  The advantages of this approach would be higher yields and lower overall costs.  If the chiplets are thinned, the total semiconductor thickness can be controlled.

But, the story does not end there.  The development of metamaterials can provide additional options.  Next month, metamaterials will start to be explored in depth.

References:

  1. https://spectrum.ieee.org/smallest-transistor-one-carbon-atom
Semiconductor Technology