What comes after nanotechnology?

In many cases, the semiconductor industry leads the application of technology shrinkage.  Each “generation” of new devices employs smaller and smaller structures in the circuitry, and work is underway on qualifying the processes for the 3nm and 2nm generations.  Since future shrinkage will be measured in fractions of a nanometer, Intel Corporation has indicated it is switching terminology from 2nm to 20 Angstroms.  (The angstrom has been employed in optics for more than a century.)  The next metric prefix down is “pico,” for 10^-12 meters.  That scale is well into the atomic structure.
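The relationships among these units are easy to check.  A quick sketch in Python (the silicon atom size in the comment is an approximate figure for illustration):

```python
# Length scales referenced above, in meters (SI).
NANOMETER = 1e-9
ANGSTROM = 1e-10
PICOMETER = 1e-12

# Intel's renaming: the 2nm node becomes "20 Angstrom".
print(round(2 * NANOMETER / ANGSTROM))   # 20

# A picometer is a hundredth of an angstrom; a silicon atom's
# covalent diameter is roughly 220 pm, so picometer-scale features
# sit well inside atomic dimensions.
print(round(ANGSTROM / PICOMETER))       # 100
```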

Research has demonstrated that materials at the nanoscale can have properties different from the bulk properties commonly associated with those materials.  One example is that silver exhibits antibacterial properties at sizes in the low double-digit nanometers.  Another example is that aluminum becomes highly reactive at a similar scale.

There are two phrases “floating” around in various publications:  metamaterials and mesoscale materials.  The former refers to human-made constructs with properties not normally found in nature, and it is the term more prominently employed in publications.  Mesoscale, as defined by Los Alamos National Laboratory [Ref. 1], “is the spatial scale beyond atomic, molecular, and nanoscale where a material’s structure strongly influences its macroscopic behaviors and properties.”  Stated differently, mesoscale can range from the behaviour of a particular section of a large weather event, e.g., how a tornado interacts with the entire storm system, down to the behaviour of a submicron section of material within the bulk material.  It concerns the properties of a small section of a larger system that may have significantly different characteristics.

Metamaterials are usually identified by noting that they have characteristics not seen in nature.  “Metamaterials are a novel class of functional materials that are designed around unique micro- and nanoscale patterns of structures, which cause them to interact with light and other forms of energy in ways not found in nature.” [Ref. 2]  The properties of these materials arise directly from the combination of various material characteristics, involving both their shape and thickness.

The ability to produce two-dimensional materials has provided the means to investigate novel material properties.  The vast majority of the work in metamaterials is focused on modifying or influencing the behaviour of electromagnetic waves, including RF (the radio spectrum), microwaves, and even infrared and visible light.  Some details on these applications and what they can accomplish will be the subject of future blogs.

The point of this blog is to focus on the application of nanomaterials to create structures that do not exist in nature.  As mentioned in previous blogs, artisans in the Middle Ages knew that adding gold nanoparticles of a certain size during the manufacture of glass would create the red color in stained glass windows.  They could not measure the particles, but they probably had a process or recipe they followed to create the desired particle size.  There are industries where ball milling is employed to obtain a sufficiently uniform particle size to enable the manufacture of the final product.

Next month, the blog will start covering details of the structure of metamaterials.  One example often employed to explain metamaterials involves light.  When light rays enter a material, they are bent by an amount related to the index of refraction, which in turn is related to how much the speed of light is reduced in that material.  Metamaterials can actually have a negative index of refraction.  That means the light will bend in the opposite direction from what is observed in nature.
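Snell's law makes the effect concrete.  A short sketch (the index values here are illustrative, not drawn from any measured metamaterial):

```python
import math

def refraction_angle(n1, n2, incidence_deg):
    """Snell's law: n1 * sin(t1) = n2 * sin(t2).
    A negative n2 yields a negative refraction angle, i.e. the ray
    bends to the opposite side of the normal from the ordinary case."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))

# Ordinary glass (n about 1.5): the ray bends toward the normal.
print(round(refraction_angle(1.0, 1.5, 30.0), 1))    # 19.5
# A negative-index metamaterial (illustrative n = -1.5): same
# magnitude, opposite side of the normal.
print(round(refraction_angle(1.0, -1.5, 30.0), 1))   # -19.5
```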

There are many applications that are being considered.  This could be a very important field to develop novel products that can produce effects that are currently impossible.  More next time.

References:

  1. https://lanl.gov/science-innovation/pillars/materials-science/mesoscale/index.php
  2. https://www.nanowerk.com/what-are-metamaterials.php
Metamaterials

Is there an end to transistor area density shrinkage?

Moore’s Law is quoted in many different forms; basically, area density is a primary focus.  If the difference between generations is a 30% reduction in dimensions, then the area for a transistor (70% by 70%) is reduced by almost 50% (length times width).  With the first arrivals of the 3nm generation, and the 2nm (or 20 Angstrom) generation in the planning stages, is there a limit on what can be done on a single planar surface?  The different types of transistor formations have been covered in previous blogs.  The question is what comes next.  The most promising candidate has been two-dimensional materials.
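The area arithmetic behind that statement can be checked in a couple of lines:

```python
# Each generation shrinks linear dimensions to about 70% of the
# previous one; the transistor footprint scales as the square.
linear_scale = 0.70
area_scale = linear_scale ** 2          # length x width

print(f"remaining area: {area_scale:.0%}")       # 49%
print(f"area reduction: {1 - area_scale:.0%}")   # 51%
```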

Gate length is the dimension the electrons travel to flow through the transistor.  The “gate” is the mechanism that permits or inhibits the flow of electrons from the source (of the electrons) to the drain, switching on and off in response to a controlling voltage.  Below a certain dimension, about 5nm, silicon cannot effectively control the flow of electrons.

Aware of this limit, researchers have been working with two-dimensional materials.  Molybdenum disulfide has been employed in creating working two-dimensional transistors (cf. the February 2022 blog).  This material consists of three single-atom sheets: sulfur, molybdenum, and sulfur.  Work has been done to produce transistors with gate lengths of 1nm using carbon nanotubes and molybdenum disulfide.  Chinese researchers have taken this one small step further by creating a vertical structure [Ref. 1] with a gate length of 0.34nm (3.4 Angstroms).  The structure is similar to a stair step: the surface of the stair is a single atomic layer of molybdenum disulfide on top of a hafnium dioxide insulator, and the transistor effect occurs on the vertical step, which is the single layer of atoms.  More details are in Reference 1.

There is additional work being conducted to evaluate the impact of current switching on nanoscale structures.  Work has shown that there are minor changes in the gate lengths on switching.  By increasing the gate width to almost 5nm, devices can improve the leakage situation at very small dimensions.

Does this work imply that the end is in sight for the continual shrinkage of circuitry?  The answer is “NO!”  Work is being done on three-dimensional circuitry, where additional circuitry is stacked on top of an existing layer.  3-D structures have been explored and have shown great promise.  Improving density by adding a second level of circuitry is equivalent to a dimensional reduction to 70% of the previous dimensions.  The issue with this approach is the potential for manufacturing losses due to misalignment and the additional layers of semiconductor processing.

What is starting to emerge is the creation of “chiplets,” which are small segments of circuitry that can perform one or more functions.  By assembling and interconnecting chiplets with other circuitry, it is possible to create unique circuits, in effect 3-D semiconductors.  The advantages of this approach would be higher yields and lower overall costs.  If the chiplets are thinned, the total semiconductor thickness can be controlled.

But the story does not end there.  The development of metamaterials can provide additional options.  Next month, metamaterials will start to be explored in depth.

References:

  1. https://spectrum.ieee.org/smallest-transistor-one-carbon-atom
Semiconductor Technology

Nanoscale and semiconductors

There has been more information in the press about the coming 3nm generation of semiconductors.  [Note: I am on the semiconductor roadmap Litho subcommittee.  Only information in the public domain will be covered in any of these blogs.]  As pointed out in Reference 1, the continual shrinking of dimensions has required a change in the design of the transistor.  The gate-all-around (GAA) FET has been selected as the most promising direction for near-future devices.  This will be a change from the FinFETs currently in production; at the 3nm or 2nm node, the current configuration will cease to be a viable alternative.  As mentioned in other blogs, the semiconductor industry will be moving to the 2-D nanosheet transistor.  These gate-all-around FETs appear to provide greater performance at lower power, which is a much needed direction.  But all good things come with consequences.  The consequence of the GAA FETs is that they will be more difficult to design and produce, which implies they will be more costly.  The picture below (Figure 1), from Reference 1, depicts the evolution of the transistor.

As designs become more complicated and include an increasing number of transistors, design costs increase.  Reference 1 projects this cost increase: the average design cost for a 28nm chip is $40 million, a 7nm chip is estimated at $217 million, a 5nm chip at $416 million, and a 3nm chip at almost $600 million.  Recovering design costs requires very high volume production.  There are other designs being considered, but the GAA FET appears to be the most probable.
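Those cost figures can be put side by side.  A small sketch (the 3nm entry is rounded to $600M from the "almost $600 million" projection):

```python
# Approximate design cost per node, in $M, as cited from Reference 1.
design_cost_musd = {"28nm": 40, "7nm": 217, "5nm": 416, "3nm": 600}

baseline = design_cost_musd["28nm"]
for node, cost in design_cost_musd.items():
    print(f"{node:>5}: ${cost}M ({cost / baseline:.1f}x the 28nm cost)")
# A 3nm design costs roughly 15x a 28nm design, which is why
# recovering the cost requires very high volume production.
```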

There is a dark side to the shrinkage of circuitry.  As frequencies increase, geometries get smaller, and voltages decrease, problems can arise.  Couplings can occur that the design modeling may miss or not be aware of, and packaging can create its own problems.  Coupling or noise can affect the performance of the circuitry, and from personal experience, the cause can be surprising.  I had a miniaturized design of a special oscillator that was superior to the larger version and easier to manufacture.  It was required to be in a shielded container, which was as planned.  The circuitry was tested prior to final assembly and performed much better than needed.  Then the container was sealed, and the device was dead.  Unsealing the container and retesting demonstrated the device was still dead.  Repeating the process on two more devices resulted in exactly the same complete failure.  The cause of the problem was eventually traced to the packaging.  The oscillator required a few inductor coils to achieve the desired performance, and sealing the package placed two of the coils close enough to each other to create a transformer, which spiked the voltage and destroyed the circuit.  Once the component layout was redesigned, the device performed very well.  This is just one example of coupling that can create havoc.

Another effect of increasing concern is device “aging,” which shortens the life of the transistor/device.  The low-level effect comes from pushing electrons through the transistor channels.  Aging has always existed, but the smaller the device dimensions, the greater the probability of an impact.  Charges being trapped where they don’t belong is a significant cause of the problem.  Designs at larger geometries have had more tolerance for these effects; smaller geometries increase the probability of this becoming an issue.  Metal migration is another cause of failure.  Dendrites on soldered connections were a significant issue twenty years ago, and currently there are widespread occurrences in high-performance applications like data centers.  [Ref. 2]  This is a potential problem for the automotive industry.  As dimensions shrink, there will be more engineering challenges to solve.  The improved performance at low power and smaller dimensions will provide the opportunity for many new developments.  Working at the nanoscale is introducing new challenges requiring novel solutions.

References:

  1. https://semiengineering.com/transistors-reach-tipping-point-at-3nm/
  2. https://semiengineering.com/what-causes-semiconductor-aging/
Nanotechnology, Semiconductor Technology

A Look at Technology in 2022 – Nanomaterials, MEMS/NEMS, Metamaterials

As 2022 begins, predicting (guessing) where technology will go is almost always wrong.  Instead, this is an opportunity to highlight some of the developments that appear possible.

Nanomaterials:

Nanomaterials have been around for more than a couple of decades.  The initial examples of nanomaterial applications were in commercial products that increased the strength of a material while decreasing its weight.  Carbon nanotubes (CNTs) were incorporated in Toyota bumpers, replacing steel with better performance and less weight.  Zyvex Technologies provided material that was employed to make lighter and stronger tennis rackets, among other sports equipment.

Now research is moving into a realm with surprising findings.  A Northwestern professor [Ref. 1] working on superconducting materials found a material, four atoms thick, that permits examining the motion of charged particles in only two dimensions.  Existing research on materials can examine particle movement in three dimensions, but not in two.  Future work will be directed at examining possible materials for energy storage.

The ability to design new materials has been inhibited by the inability of existing laboratory equipment to create the conditions required for forming the new material.  As the techniques are developed and the equipment becomes available, there should be significant advances in discovering the capabilities of new materials.

MEMS/NEMS:

Micro Electro Mechanical Systems (MEMS) are miniaturized systems that can perform at the micro scale the same functions as typical gears, sensors, transducers, and analog clocks.  They address the need to shrink the size of a product or of a function within a product.  Examples of MEMS in today’s world include the airbag deployment sensors in vehicles (accelerometers), devices that detect a person falling, the orientation (portrait or landscape) sensing in a smartphone, miniaturized sensors that can be swallowed, and many more.

The process for making MEMS devices is in many ways similar to semiconductor processing.  It deviates from the typical semiconductor process in that some portions of the device are actually etched away to create the sensitivity/function required by the end product.

Nano Electro Mechanical Systems (NEMS) is a term that may not be familiar to the general public.  NEMS are similar in function to MEMS but have different properties due to interactions at the nano scale.  One example [Ref. 2] is from the University of Florida, where researchers have demonstrated efficient mechanical signal amplification using nanoscale mechanical resonators.  They created “drumheads” of thickness from under 1nm to just under 8nm stretched over a 1.8 micrometer void.  NEMS resonators will involve some portion of the device that is no more than a few nanometers thick.

Metamaterials:

A metamaterial is a material designed to exhibit specific properties that are not found in naturally occurring materials.  These materials can affect different kinds of waves, e.g., light, other electromagnetic radiation, and sound.  The key point is that metamaterials are formulated in repetitive patterns that are smaller than the wavelength of the wave to be affected.  Efforts on the index of refraction of lenses are working to create a negative index for specific wavelengths.  Reference 3 is a very high-level overview of metamaterials; more detailed information is available through internet searches.
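The subwavelength requirement can be sketched numerically.  The 1/10 fraction below is an illustrative rule of thumb, not a fixed standard:

```python
# The repeating unit cell of a metamaterial must be smaller than the
# wavelength it is meant to manipulate.
C = 3.0e8  # speed of light in vacuum, m/s

def max_unit_cell(frequency_hz, fraction=0.1):
    """Largest acceptable unit-cell size (m) at a given frequency,
    assuming a cell of ~1/10 the wavelength (illustrative)."""
    wavelength = C / frequency_hz
    return wavelength * fraction

# 10 GHz microwaves: 3 cm wavelength -> millimeter-scale cells.
print(f"{max_unit_cell(10e9) * 1000:.0f} mm")       # 3 mm
# Green light (~560 THz): ~535 nm wavelength -> tens-of-nm cells,
# which is why optical metamaterials need nanoscale fabrication.
print(f"{max_unit_cell(560e12) * 1e9:.0f} nm")      # 54 nm
```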

Thoughts:

The equipment needed to research and develop new materials and material structures is becoming available.  Access to new types of equipment will produce interesting developments at the very small scale of nano and sub-nano.

References:

  1. https://scitechdaily.com/unplanned-discovery-a-super-material-for-batteries-and-other-energy-conversion-devices/
  2. https://phys.org/news/2021-10-highest-amplification-tiny-nanoscale-devices.html
  3. https://www.azom.com/article.aspx?ArticleID=21097  
Misc Ramblings, Science

Ultra-pure material

We have already reported on the development of two-dimensional nanosheet transistors and other two-dimensional materials.  One previously mentioned challenge is the fact that we do not know the material properties of truly “pure” materials.  Current technology provides the ability to achieve purities in the range of parts per million; in percentage parlance, that is 99.9999% pure material.  It is expensive, and it is needed.  Doping silicon wafers with specific impurities can change the bandgap (increasing or decreasing conductivity) of the resultant material to enable the desired flow of electrons.  While that percentage of material purity sounds excellent, there is another way of looking at the numbers.

Reference 1 refers to an analysis by Flavio Matsumoto of the number of osmium atoms in a cubic centimeter.  He performs the calculation two different ways, and the answers are close: both result in about 7 x 10^22 atoms, with only a fractional difference.  Depending on the specific element, the number will vary.  If the material (osmium in this example) is 99.9999999% pure, there will be 7 x 10^13 non-osmium atoms in that cubic centimeter.  If one then reduces the volume in question to one cubic micron, there will be roughly 70 non-osmium atoms present.  What happens when materials get near being without impurities?
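The arithmetic is easy to reproduce (at this purity, a cubic micron works out to roughly 70 impurity atoms):

```python
# Osmium atoms per cubic centimeter, per the calculation in Ref. 1.
atoms_per_cm3 = 7e22

# 99.9999999% pure leaves a 1e-9 (part-per-billion) impurity fraction.
impurity_fraction = 1e-9
impurities_per_cm3 = atoms_per_cm3 * impurity_fraction
print(f"{impurities_per_cm3:.0e} impurity atoms per cm^3")        # 7e+13

# A cube one micron on a side is (1e-4 cm)^3 = 1e-12 cm^3.
impurities_per_cubic_micron = impurities_per_cm3 * (1e-4) ** 3
print(f"about {impurities_per_cubic_micron:.0f} impurity atoms")  # about 70
```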

Princeton researchers [Ref. 2] have created a sample of gallium arsenide with impurities of one atom in ten billion.  The sample measured 5 or 6 millimeters on a side.  To test the material, they cooled it to temperatures equivalent to those of space and placed it in a strong magnetic field, looking to observe changes in the electron flow.  They were surprised to find that many advanced physics phenomena could be observed at weaker magnetic fields.  From the data, it appears that the effects could be observed at fields two orders of magnitude weaker than those required to observe the same phenomena in less pure materials.

Our current technology permits us to change the electrical conductivity of materials by adding a small amount of specific impurities (doping).  It is also known that adding impurities changes the lattice structure to advance or retard the ability of electrons to move within the lattice.  Since the lattice structure is changed, the question that remains is what happens to various other properties of the material.  To address that question, various quantities of absolutely pure material need to be created.  This raises the challenge of developing processes to remove the impurities that remain in today’s “high purity” materials.

This is a non-trivial problem.  Graphene has been manufactured for more than 20 years, yet there is still a challenge in obtaining a large area of graphene without structural defects, and impurities can create those defects.  The work being done on two-dimensional transistors, which can accommodate some defects, has not yet produced a process that can create the billions of transistors required for one microprocessor chip.

The new year, 2022, should provide some interesting developments that further our understanding of pure materials.

References:

  1. https://www.quora.com/How-many-atoms-of-osmium-are-there-in-one-cubic-centimeter-I-honestly-cant-find-any-internet-help-whatsoever
  2. https://phys.org/news/2021-11-ultra-pure-semiconductor-frontier-electrons.html
Science

Experiments & Results

Previously, we have covered the challenges of experiments and the need for reproducibility.  Recently, there has been an additional occurrence that merits discussion.  In one study, roughly 100 peer-reviewed articles published in respectable journals contained sufficient detail on the experiments for the results to be validated by other researchers.  In that evaluation, 87 of the published results could not be verified by other researchers; in some cases, even the original researcher was unable to obtain results that could be considered within experimental error.  If this is a problem, what about all the published research that does not include enough information to independently validate the initial results?  How can those efforts be considered valid?  That is not a question that will be answered in this blog.

In previous blogs, I have mentioned the use of equipment that might have the same model number as in a published experiment.  Although the model number indicates the equipment is similar, it may not be identical.  In one particular instance, two researchers located hundreds of miles apart ran experiments on a nanomaterial.  Each had samples of the material from the same lot, produced by a reputable manufacturer.  They ran a predetermined process and used similar test procedures and recently calibrated equipment, but they obtained different results.  They exchanged material samples, and each duplicated their original results on the new (to them) material, which still differed from their colleague’s results.  The question was what was happening.  It took a lot of work, but they finally found the source of the problem.  In the course of routine maintenance along with the required calibration, some worn parts had been replaced by factory “originals.”  One of the “original” parts had gone through a redesign to make a stirring process more efficient.  It worked and was more efficient, but that meant the process created a different distribution of particle sizes than the equipment with the original unchanged part.  So checking just model numbers and type may not be sufficient.

Another example involved a supplier and a researcher.  The supplier certified the properties of the material, which was then retested by the researcher.  The material was shipped via standard commercial transport.  The researcher tested the material and found different results.  Another batch of the certified material was shipped, and the researcher again found a difference from the supplier’s certification.  After a number of conversations and equipment checks, it was determined there was an external cause: the material, which had been certified, had acquired enough oxygen on its surface to change the characteristics of the nanomaterials.  This is an important point to remember with nanomaterials.

While these seemed to cover the possible causes of skewed experimental results, a new one was found.  While not involving normal experimental equipment, this one involves a standard commercial over-the-range microwave that contains two small lights in its bottom surface.  One bulb failed.  The solution seemed simple: look at the manual, which identifies the bulb by part number, and order one.  Well, easier said than done.  The bulb, in this case a 40W bulb, is no longer available, but there was a manufacturer-approved 50W bulb specified as a replacement.  Fine; the bulb was ordered and installed when it was received.  The illumination was considerably better than the original bulb next to it, but over the following weeks the bottom of the microwave was noticeably warm to the touch.  When the second bulb failed, a search for a lower-wattage bulb turned up a 35W option, which was not available.  The decision was made to find information on the existing bulb itself.  It turned out there was some very small marking on the second bulb, which itself is not large, and it was possible to make out a 20W marking.  Two 20W bulbs were ordered and installed upon receipt, and the resulting illumination was the same level as the original bulbs.  The part identification in the manufacturer’s manual was wrong!  Someone following the instructions in the manufacturer’s manual would be installing incorrect parts.

The point of these examples is that in performing research, no assumption should be made without verification of the equipment, materials, and processes being employed.  Even manufacturer supplied directions/instructions may be in error.

Misc Ramblings, Science

Interesting Times due to Interesting Discoveries

In previous blogs, two-dimensional semiconductors have been discussed.  Work at the Singapore University of Technology and Design has developed a different approach to solving the issues of the very small transistors that will be required for future generations of semiconductors [Ref. 1].  Their work demonstrated that 2D semiconductors using MoSi2N4 and WSi2N4 form Ohmic contacts with titanium, scandium, and nickel.  These structures are free from Fermi level pinning (FLP), which is present in other 2D semiconductors.  Their approach to minimizing FLP requires precise positioning of the metal on top of the semiconductor, employing the 2D metal as the contact material.  The material is shielded by the formation of an inert Si-N layer that protects the semiconductor layer from defects and material interactions at the contact interface.  They are planning to employ computer evaluations of similar materials to identify other candidates.

Work is continuing to find improved materials for chemical energy storage, where energy stored in chemical form can be converted into mechanical energy.  Chemical energy is a term for the energy stored in the covalent bonds holding atoms together in molecules.  Work accomplished at the University at Buffalo, the University of Maryland, and the Army Research Laboratory evaluated the potential of combining energetic materials with ferroelectrics to create a chemically driven, high-power-density energy source [Ref. 2].  The reported results are that two dissimilar materials (molecular energetic materials and ferroelectrics) can be combined to obtain chemically created electrical energy with a specific power of 1.8kW/kg.  Polarization of molecular energetic ferroelectrics can provide control of both the energy density and the rate of energy release.

Work at Northwestern University and Argonne National Laboratory has produced a material that is four atoms thick and allows the evaluation of charged-particle motion in only two dimensions [Ref. 3].  The target material was a combination of silver, potassium, and selenium.  When heated above 450F, it became a relatively symmetrical layered structure.  Before the heating, the silver ions were fixed within the two-dimensional material; after the transition, the silver atoms could move slightly.  This discovery has the potential to provide a platform for evaluating materials constructed to have both high ionic conductivity and low thermal conductivity.  One of the potential outcomes is the ability to develop membranes for environmental cleanup, including the desalting of water.

A team of researchers at the University of Florida and the Florida Institute of Technology has produced high-efficiency mechanical signal amplification in nanoscale resonators operating at radio frequencies [Ref. 4].  The researchers observed parametric amplification in nanoscale devices.  The device contains a nanoscale drumhead mechanical amplifier consisting of a two-dimensional semiconducting molybdenum disulfide membrane with drumhead thicknesses of 0.7, 2.8, and 7.7 nanometers.  The drumhead was 1.8 microns in diameter with a volume of 0.020 cubic microns.  It was fabricated by transferring nanosheets exfoliated from bulk crystals over microcavities to make the thin nanodrums.  Amplification gains of up to 3600 were obtained.  This process can be employed in developing nanoscale sensors and actuators.

What we are witnessing is the development of new materials, made possible by ongoing research in nanotechnology and the resultant development of tools for analyzing the various materials.

References:

  1. https://scitechdaily.com/newly-discovered-family-of-2d-semiconductors-enables-more-energy-efficient-electronic-devices/
  2. https://www.nanowerk.com/spotlight/spotid=58913.php
  3. https://scitechdaily.com/unplanned-discovery-a-super-material-for-batteries-and-other-energy-conversion-devices/
  4. https://phys.org/news/2021-10-highest-amplification-tiny-nanoscale-devices.html
Misc Ramblings

Evolving nano developments

With the emphasis on nanotechnologies and on two-dimensional materials, other areas that are pushing boundaries are often overlooked.  A recent article [Ref. 1] summarizes work done at King’s College London on relieving pain.  Their treatment employs ultra-low frequency neuromodulation to safely relieve chronic pain.  Neuromodulation employs electrical current to block the transmission of pain signals between neurons; the procedure normally requires implanting a device and sending signals, with spinal cord stimulation being one example.  Unfortunately, the success of that process has been less than desired.  The new method employs a type of ultralow-frequency biphasic current with a period of 10 seconds.  The process mimics direct-current applications, which can cause tissue damage and electrode degradation, but due to the alternating polarity there is the potential for much reduced tissue damage.  Work is continuing in this area; the Wolfson Centre for Age-Related Diseases [Ref. 2] has additional information regarding their ongoing work.

A team of Chinese researchers has developed an interesting “twist” on graphene [Ref. 3].  By placing a second layer of graphene over the first with a slight misalignment, they created a moire superlattice.  As the twist angle approaches 1.08 degrees, the material begins to show properties that imply low-temperature superconductivity.  The kinetic energy of the electrons is suppressed, and they form localized accumulations at points where the two sheets interact; additional effects include correlated insulator states.  This is important because integrated photonics requires nanolasers.  (Data transmission on a chip can travel at the speed of light.)  Work to produce these nanolasers has focused on a number of approaches, but the material properties for these approaches have not been developed at the nanoscale required for inclusion on semiconductor devices.  The referenced paper provides specifics on the low power required for lasing.  The researchers indicate their opinion that this development has the potential to impact many fields, including “nonlinear optics and cavity quantum electrodynamics at the nanoscale.” [Ref. 4]

Researchers from Rice University and Northwestern University created a stable sheet of double-layered borophene [Ref. 5].  This material structure is similar to graphene (carbon sheets); the atomic number of boron is 5, while carbon’s is 6.  Research indicates that borophene has electrical and mechanical properties that could rival graphene’s, but borophene is much more challenging to create.  The researchers succeeded in growing the material on a metal substrate.  When one attempts to create the double-sheet structure, boron tends to revert to its three-dimensional structure.  The researchers think that borophene could produce a much greater variety of structures than graphene.  One projection is the potential for inserting a layer of lithium to create a superior two-dimensional battery.

As always, these developments don’t happen overnight.  The work on borophene has taken over six years.  It will be followed by experimentation to develop a reasonable means of creating the materials; only after that is available can products be produced that will appear in the public arena.

References:

  1. https://physicsworld.com/a/ultralow-frequency-neuromodulation-safely-relieves-chronic-pain/
  2. https://www.kcl.ac.uk/neuroscience/about/departments/card
  3. https://physicsworld.com/a/moire-superlattice-makes-magic-angle-laser/
  4. https://www.nature.com/articles/s41565-021-00956-7
  5. https://physicsworld.com/a/double-layered-borophene-is-created-at-long-last/

More on Semiconductors

Last month, this blog covered some of the challenges of continually shrinking semiconductor geometries and the related difficulties with traditional transistor designs.  This month’s blog explores transistor geometry.  In May 2021, IBM announced a 2nm nanosheet semiconductor technology. [Ref. 1]  Their claim is that this configuration will improve performance while consuming only 45% of the energy of the current generation of chips.

The nanosheet technology is one where the transistors are constructed from three horizontal sheets of silicon with the gate material surrounding the silicon.  The nanosheet technology is projected to replace the existing FinFET technology, which is approaching the limits where it can function properly due to material characteristic limitations.  The transistor has developed through various shape configurations, but the materials have remained the same. [Ref. 2]  The transistor employed in microprocessors consists of a gate stack, a channel region, a source electrode, and a drain electrode.  The latter three are all based on silicon but have different doping atoms (impurity atoms introduced during processing to create the desired electrical properties).  The gate stack consists of a gate electrode on a layer of dielectric material.  The dielectric material is employed to ensure that the electrons flow when the gate is appropriately charged and are not subject to random leakage.

When the size shrinkage became too small to prevent current leakage, today’s FinFET transistor was invented and introduced into production in 2011.  With current production at the 7nm node and moving to the 5nm, Samsung has stated the FinFET design has run out of capability at 3nm.  A new design needs to be developed.  The new design is the nanosheet, although there are a few other names, like GAA (gate-all-around).  The picture below (from Ref. 2) depicts the design evolution of transistors.  The nanosheet design was announced by IBM for the 2nm node (Intel calls this node the 20 Angstrom node).

The introduction of a change in design will provide a number of challenges that manufacturing needs to overcome.  Since there is similarity between the nanosheet and the FinFET, some of the process learning can be accelerated.  Still, there are innovations required. [Ref. 3]  The first is the need for epitaxially grown multilayers of Si and SiGe to define the channel.  Next, the nanosheet design requires an inner spacer, which is additional dielectric material for source/drain isolation.  Third is the separation of the nanosheets from each other, which requires a very selective etch process.  Finally, there is the deposition and patterning of metal around and in between the nanosheet layers.  As mentioned earlier, this design enhancement is anticipated to reduce power usage by 45%.

What happens when the nanosheet runs its course?  There are designs being considered called forksheet. [Ref. 3]  The challenges are numerous and also include a concern about electrostatic discharge.  Also under consideration is something called a CFET, or complementary FET.  This design employs vertically stacked nMOS and pMOS devices.

Consequently, changes are coming to semiconductors.  The devices will become smaller, more powerful, and use less energy.  The manufacture of the devices will push the limits of current manufacturing and require the invention of new techniques and new equipment.  It will be very interesting to observe the changes in devices as semiconductors move to smaller and smaller nodes and power requirements start to drop.  Good things are coming.

References:

  1. https://spectrum.ieee.org/ibm-introduces-the-worlds-first-2nm-node-chip
  2. https://spectrum.ieee.org/the-nanosheet-transistor-is-the-next-and-maybe-last-step-in-moores-law
  3. https://www.eetimes.com/entering-the-nanosheet-transistor-era/#


The Shrinking Dimensions

As semiconductor manufacturing continues the dimensional shrinking process, novel ideas are required to keep shrinking the dimensions while increasing performance.  There are some thoughts that this portion of the Moore’s Law curve will provide some additional benefits.

As the industry moves into the sub-10 nanometer nodes, physical properties become major role players in what will be able to be manufactured in volume (with sufficient yields) and what won’t work.  Intel has released its roadmap [Ref. 1] for the smaller dimensions and is moving away from the nanometer scale.  After the 3nm node, it will not be the 2nm node.  Instead, it will be the 20 Angstrom node.  That will be followed by the 18 Angstrom node.  [An Angstrom is 10-10 meters, or a tenth of a nanometer.  It is not a new term; Angstroms have been used in optics for well over a century.]
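The renaming is simple unit arithmetic; a minimal sketch of the conversion:

```python
def nm_to_angstrom(nm: float) -> float:
    # 1 nanometer = 10 Angstroms (1 Angstrom = 1e-10 meters)
    return nm * 10.0

# Intel's renaming: the node after 3nm is labeled "20A" rather than "2nm",
# and the following node is 18 Angstroms (1.8 nm).
print(nm_to_angstrom(2.0))   # 20.0
print(nm_to_angstrom(1.8))   # 18.0
```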

While the size terminology is interesting, it is the actual physical properties and the manufacturing of the completed devices that are the objective.

Field Effect Transistors (FETs), and specifically MOSFETs, manage current flow through the channel by controlling the voltage on the gate.  The gate electrode and the channel act as the two plates of a capacitor.  Unfortunately, the gate capacitance depends on the material properties at the dimensions required.  The loss of the needed parameters as size shrinks is driving the investigation into what are being called two-dimensional semiconductors. [Ref. 2]  More detail on this topic is available in our April 2021 blog. [Ref. 3]
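That capacitor analogy can be made concrete with the parallel-plate formula C/A = ε0·εr/t.  A rough sketch with illustrative values (the numbers are assumptions for illustration, not taken from the references):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def gate_capacitance_per_area(eps_r: float, t_ox_m: float) -> float:
    """Parallel-plate estimate of gate capacitance per unit area (F/m^2).
    A thinner dielectric gives more capacitance and better channel control,
    but shrinking t_ox too far invites tunneling leakage."""
    return EPS0 * eps_r / t_ox_m

# SiO2 (eps_r ~ 3.9) at an illustrative 1 nm oxide thickness
c = gate_capacitance_per_area(3.9, 1e-9)
print(f"{c:.3e} F/m^2")  # on the order of 3e-2 F/m^2
```

The point of the sketch is the scaling problem: halving the oxide thickness doubles the capacitance on paper, but at these dimensions the bulk material properties no longer hold, which is what motivates the 2-D semiconductor work.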

In the article with Intel’s planned roadmap, there is mention that starting at 20A, Intel is considering changing to a “gate-all-around” (GAA) structure.  The Intel approach is somewhat different from others working on GAA, and its modification is being called the RibbonFET. [Ref. 1]

Another topic of current interest is “chiplets”. [Ref. 4]  The concept behind this is to create individual segments for a system and be able to place them in positions that are favorable to data transfer and processing.  This approach is a means of reducing costs while achieving apparent scaling benefits.  On a larger scale, there are many companies working on various approaches to reduce dimensions by using multiple levels (this is not multiple layers) with interconnections to complete the system.  I know of one sensor system that has multiple levels for a complete autonomous sensor system capable of working at temperatures of over 120C and pressures of over 10,000 PSI.  It has sensors and can store data for up to a week before being wirelessly interrogated for data transmission.  This device is under 8mm in diameter.  It will be shrunk to under 5mm in the near future, but taking it to under 1,000 microns would be a challenge without moving to something like the chiplet approach.

While the topic of semiconductors is mainly about shrinking feature sizes and increasing processing capability, the real changes will come from nanomaterial characteristics.  The fact that researchers today are developing methods to layer different two-dimensional materials together indicates the direction.  For this to be successful, a method to produce large-scale 2-D material without defects remains a challenge.  It will happen, but it will require development efforts.

References:

  1. https://www.eetimes.com/intel-charts-manufacturing-course-to-2025/#
  2. https://semiengineering.com/thinner-channels-with-2d-semiconductors/
  3. http://www.nano-blog.com/?m=202104
  4. https://semiengineering.com/piecing-together-chiplets/?cmid=291477a6-f062-4738-b59f-1ec44fd21e39