Mitigating Radioactive Waste with Nanomaterials

As the available tools become more precise and gain new capabilities, surprising discoveries arise.  Sometimes this happens by accident, as it did for scientists in Germany, Russia, and Sweden.  The report [Ref. 1] indicates that the scientists found a chemically stable compound containing plutonium.  Specifically, the team led by Kristina Kvashnina at the Helmholtz Center Dresden-Rossendorf found that a compound containing plutonium in its fifth oxidation state can remain stable for “long” periods of time.  The team created plutonium-oxide nanoparticles in different oxidation states.

“One of the most fundamental properties of the chemical behavior of plutonium is the variety of its oxidation states.  The oxidation state is defined by the number of electrons that are removed from the valence orbitals of a neutral atom.  In the pentavalent oxidation state, plutonium has three electrons in the 5f shell, leaving the 6d orbitals empty.  The oxidation state of plutonium determines its chemical behavior and its reactivity.” [Ref. 2]

The team observed the resultant particle structure using X-ray equipment at the European Synchrotron Radiation Facility in Grenoble, France.  They found that theory agreed with the rapid formation of the plutonium-oxide nanoparticles from the third, fourth, and fifth oxidation states.  The sixth oxidation state proved to be different and appeared to involve a two-step process that took some time to complete.

The conclusion was that the longer formation time could be due to a stable form of the fifth oxidation state, which had not been observed experimentally before.  If this is correct, the particles could theoretically remain stable over months, which would be a major change in our understanding of the chemical properties of nuclear material, particularly nuclear waste.  The team performed additional experiments using high-energy-resolution fluorescence and confirmed that a stable form of the fifth oxidation state had indeed been observed.  They further confirmed the compound’s stability by drying it out of liquid suspension and measuring the absorption spectrum over time.

 So, what is the significance of a stable state of plutonium-oxide? 

One of the most significant issues facing the nuclear power industry is the disposal of “spent” radioactive material from reactors.  Plutonium plays a prominent role in nuclear energy production as well as nuclear weapons.  The development of storage methods for nuclear waste is challenging, to say the least.  With various materials capable of remaining radioactive for tens of thousands of years, designing containment that will safeguard the material is difficult.  There are ongoing studies on ways to mitigate the long-term effects of the radioactive material.  This research may provide researchers a methodology for creating nanomaterials that mitigate the effects of radioactive materials by converting them into stable compounds.  This work provides a new way of evaluating how to handle long-term storage of radioactive waste.

References:

  1. https://physicsworld.com/a/surprisingly-stable-plutonium-compound-could-affect-nuclear-waste-storage/
  2. “A Novel Metastable Pentavalent Plutonium Solid Phase on the Pathway from Aqueous Plutonium(VI) to PuO2 Nanoparticles,” https://onlinelibrary.wiley.com/doi/10.1002/anie.201911637


New nanomaterials and implications

With all the publications touting new materials or new material properties, it is often difficult to see how disparate findings could combine into very useful devices.  We will consider two findings.

A team of scientists from the University of Vermont, Lawrence Livermore National Laboratory, and other labs developed a form of silver with enhanced strength [Ref. 1].  They found a new nanoscale mechanism that enables improving a metal’s strength without reducing its electrical conductivity.  Currently, various alloys are employed to make materials stronger and to improve strength-related properties such as brittleness or softening, but this strengthening has come at the cost of decreased electrical conductivity.

The starting point is the fact that as the grains of a material get smaller, the material gets stronger.  When the grain size becomes less than tens of nanometers wide, however, the boundaries between grains become unstable and can move.  One approach to improving the strength of metals like silver is to create a special type of grain boundary known as a coherent twin boundary.  These boundaries are very strong but deteriorate when the size is less than a few nanometers due to imperfections in the lattice.

The researchers developed an approach to create a “nanocrystalline-nanotwinned metal.”  A small amount of copper atoms, which are slightly smaller than silver atoms, is employed to move into defects in both grain and twin boundaries.  This has created a super-strong form of silver that retains the conductivity of silver.  It appears that the copper atoms move into the interfaces and not into the main part of the silver structure.

This effort overcomes the softening previously observed as the grains get too small, which is called the Hall-Petch breakdown.  The researchers are confident that their findings can be applied to other materials.
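
For reference, the grain-size strengthening behind this discussion is usually written as the classical Hall-Petch relation; the form below is the standard textbook expression, not a formula taken from the cited work.

```latex
\sigma_y = \sigma_0 + k_y \, d^{-1/2}
```

Here σy is the yield strength, σ0 the lattice friction stress, ky a material-dependent strengthening coefficient, and d the average grain diameter.  The Hall-Petch breakdown mentioned above is the regime, at grain sizes of roughly ten nanometers and below, where shrinking d no longer strengthens the metal and softening sets in instead.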


Researchers at the University of Porto (Portugal) found a negative thermal expansion (NTE) effect in Gd5Si1.3Ge2.7 magnetic nanogranules [Ref. 2].  Most materials expand when heated (positive thermal expansion, PTE) and contract when cooled.  This material is part of a family of materials that has important implications for the development of future devices.

Work on materials like glass has been producing glasses with low expansion coefficients for more than 50 years.  This type of precision in glass is required to manufacture very-high-resolution optical telescopes.

The ability to combine materials with both positive and negative expansion in electronics would enable the development of more robust material interconnections (contacts) that can withstand the rigors of wide temperature extremes.
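
As a simplified illustration of that idea, a rule-of-mixtures estimate shows how a filler with negative expansion could null the net expansion of a composite contact.  The coefficient values and the rule-of-mixtures model itself are illustrative assumptions, not numbers from the cited work.

```python
# Illustrative sketch: net linear thermal expansion of a two-phase composite
# estimated with a simple rule of mixtures.  All coefficient values below are
# invented for demonstration only.

def composite_cte(alpha_matrix, alpha_filler, filler_fraction):
    """Rule-of-mixtures estimate of a composite's linear CTE (1/K)."""
    return (1 - filler_fraction) * alpha_matrix + filler_fraction * alpha_filler

alpha_metal = 17e-6    # typical positive CTE for a metal interconnect, 1/K
alpha_nte = -30e-6     # hypothetical negative-CTE (NTE) filler, 1/K
fraction = 0.36        # filler volume fraction chosen to nearly cancel the expansion

net = composite_cte(alpha_metal, alpha_nte, fraction)
print(f"net CTE: {net:.2e} 1/K (close to zero, i.e., dimensionally stable)")
```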


Why are these findings important?  Currently, efforts are being made to create longer-lasting electronics.  Material fatigue and power consumption are two major concerns.  What electronics available today will still be functioning 100 years from now?  Excluding the obsolescence factor, nothing will still be working as designed.  Current exploration conversations are considering missions to both the Moon and Mars.  There are space probes that have been functioning for decades; other probes stop functioning “mysteriously.”  Multiple redundancy provides a partial work-around, but how much redundancy is allowable when the mission is weight-limited?  There are no manufacturing facilities at the places being considered for human exploration.  The ability to manufacture devices that are better able to withstand temperature extremes, use power more efficiently, and remain operational for longer periods of time is needed.  The research mentioned above is not the answer, but it may be the first step toward a solution.

References:

  1. https://www.uvm.edu/uvmnews/news/inventing-worlds-strongest-silver
  2. https://physicsworld.com/a/giant-negative-thermal-expansion-seen-in-nanomagnet/?utm_medium=email&utm_source=iop&utm_term=&utm_campaign=14290-44232&utm_content=Title%3A%20Giant%20negative%20thermal%20expansion%20seen%20in%20nanomagnet%20-%20Editors_pick


Nanotechnology is getting interesting

In our June 30 blog, we covered Part 2 of an upcoming technology disruption.  This blog covers material from IEEE Spectrum magazine [Ref. 1] on a carbon nanotube (CNT) microprocessor.  A more detailed article is available from Nature [Ref. 2].  CNT transistors have been around for more than 10 years, and some have even been assembled into extremely simple “computers.”  This device contains nearly 15,000 transistors.  It has the ability to say “Hello,” which is the traditional test of a functioning computer.  Current microprocessors contain billions of transistors, so there is still a long way to go, but it is a start.

The development, created by a team from MIT and Analog Devices, is a fully programmable 16-bit carbon nanotube microprocessor.  It is based on the RISC-V instruction set and can work with both 16-bit and 32-bit instructions.  To accomplish this, three problems needed to be overcome.

In our opinion, the most challenging was the fact that there is no process that creates only semiconducting CNTs; there is always a mix of metallic CNTs with the semiconducting ones.  There are indications that today’s best processes can produce four 9s (99.99%) semiconducting purity, but not the eight or nine 9s required for a robust manufacturing process.  The consequence of the impurity is an increase in signal noise.
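
To get a rough sense of scale, the sketch below estimates how many metallic tubes would be expected on a chip of roughly this size at different purity levels.  The tubes-per-transistor count is an invented illustrative number, not a figure from the paper.

```python
# Rough arithmetic sketch: expected number of metallic (unwanted) nanotubes on
# a chip at a given semiconducting purity.  The tubes-per-transistor value is
# an assumption made for illustration only.

def expected_metallic_tubes(purity, transistors, tubes_per_transistor):
    total_tubes = transistors * tubes_per_transistor
    return total_tubes * (1 - purity)

chip_transistors = 15_000  # roughly the size of the CNT processor described above

for label, purity in [("four 9s", 0.9999), ("six 9s", 0.999999), ("eight 9s", 0.99999999)]:
    bad = expected_metallic_tubes(purity, chip_transistors, tubes_per_transistor=25)
    print(f"{label}: ~{bad:.2f} metallic tubes expected on the chip")
```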

As with many breakthroughs, the solution was something other than developing additional ways to increase semiconducting CNT purity.  The researchers analyzed the capabilities of various circuit designs and found a pattern suggesting that certain combinations of logic gates significantly reduce the noise.  The power-waste issue turned out to be minor compared to the noise issue.  With this information, they developed a set of design rules that permits large-scale integration of CNTs at readily available purity.

The question of how to create the circuitry in the first place was actually the first problem solved on the way to the development above.  Typically, CNT transistors are created by spreading a solution of CNTs uniformly across the surface of a wafer.  The issue is that aggregates, or clumps, of CNTs bundle on the surface.  This is unacceptable because the clumps cannot form transistors.  The solution exploited the fact that single CNTs are held to the surface by van der Waals forces while bundles of CNTs are not, so it is possible to remove the bundles.

The third challenge is that CMOS logic requires both NMOS and PMOS transistors, and it is not practical to dope individual structures to provide the desired N or P characteristic.  The researchers instead employed a dielectric oxide to either add or subtract electrons.  Using atomic layer deposition (ALD), they deposited an oxide with the desired properties one atomic layer at a time.  By selecting the proper materials, they were able to reliably create PMOS and NMOS devices together.  This is a low-temperature process that permits building layers of transistors between levels of interconnect.

The interesting aspect of the CNT microprocessor development is that the processes employed are already used in semiconductor manufacturing.  Turning this type of concept into a full, high-volume device does not require building a new industry with new tooling.  It is possible that, as the techniques evolve, current semiconductor manufacturers could adapt their processes without a major retooling effort.  This could hasten the acceptance of CNT transistors.  It will be interesting to observe the progress over the next few years.

References:

  1. https://spectrum.ieee.org/nanoclast/semiconductors/processors/modern-microprocessor-built-using-carbon-nanotubes?utm_source=circuitsandsensors&utm_medium=email&utm_campaign=circuitsandsensors-09-03-19&mkt_tok=eyJpIjoiTWpSa01HVTNZemN4TXpnNCIsInQiOiJhQTkxMkYyNTF5SVFyZzY0eXhHZmVUMlJhZ2hzeml2TE94KzVEdkR6cHIzMHFkUmhOZ0ZJYzBEMlRrUEZHaDNWRFhjcWdZRUlUUkdmeHc0Z3NNZmFoUEUycTFNWjFHSmhcL3NKNSt6VllYYVVCUDRLWm9qTURrcWtKSElxcVA3UmMifQ%3D%3D
  2. https://www.nature.com/articles/s41586-019-1493-8


Reliability

It might seem odd to discuss reliability in a blog that focuses on nanotechnology and its implementation.  However, if one considers that we are using carbon nanotubes (CNTs) in many different products to increase strength and reduce weight along with the semiconductor circuitry that has nano-scale components, then it might not be too unusual to consider reliability.    

Over the last twenty years, there has been a lot of controversy about CNTs.  The shape of a CNT is like a short, straight stick, albeit at nano dimensions, and its appearance resembles five of the six types of asbestos particles.  Consequently, this similarity raised concern and a number of studies were undertaken.  Results came out on both sides of the issue: some showed potential danger, and others indicated that, with precautions, CNTs are safe.  (We have covered Nano-Safety a few times, so we will not add more here.)

Composite materials employing CNTs for strengthening work well, and when the composite is damaged, the release of any CNTs appears to be minimal.  Overall, the application of CNTs provides significant benefit for long-term usage of items containing CNT-embedded composites.  However, CNTs have been used in products for only a few decades, which is not long enough to evaluate the long-term effects.

Many attempts have been made to develop carbon-based materials for electronics.  Graphene has been covered in detail within this blog.  The application of both nanotubes and various two-dimensional electronic materials appears to have had moderate success.  The question is how to mass-produce items/devices in the billions, as is being done with semiconductors.  This is a challenge.

Semiconductors are coming into the nano-scale regime even if not through new electronic circuit components.  The line width of circuit features is well below 50nm, and experimental efforts today are at 5nm and below.  Circuit designers, material scientists, and semiconductor fabricators are aware of potential current changes due to leakage of electrons through materials that act as barriers to this movement at larger dimensions.  New materials are being developed and evaluated.  Circuit misbehavior is always a concern.

A key challenge for electronic circuits is what happens as a device ages.  Historically, although semiconductors are relatively new, the consumer applications employing sophisticated electronics were limited to a narrow range of temperatures.  These devices are increasingly being employed in automotive, industrial, and medical applications.  Consequently, reliability has become much more important, and device aging is becoming a more critical element of design.

The car or truck that we own has become among the most electronically controlled devices we have.  The temperature extremes that a vehicle can be subjected to, especially under the hood, can range from -40 F or lower to well above 150 F.  An automobile that is purchased and driven in Arizona, for example, is subject to significantly different conditions from one used in Alaska.  There is also a difference in how the electronics are used: an automobile used for work or pleasure will make a limited number of trips during the course of a day or week, while a vehicle like a taxi or delivery truck is in constant use and subjected to much more stress [Ref. 1].  These become part of the new challenges for devices as electronics become more integral to everyday life.  The medical field has similar issues regarding the reliability of equipment used for performing medical exams or procedures.  When a life-sustaining device is employed, the reliability of that device is critical.  Failure is not an option.
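
To give a feel for why those temperature extremes matter, the sketch below computes the Arrhenius acceleration factor commonly used in reliability engineering for thermally driven wear-out.  The activation energy is a typical illustrative value, not one tied to any specific failure mechanism discussed in the reference.

```python
# Minimal sketch of temperature-accelerated aging using the Arrhenius model.
# The 0.7 eV activation energy is an illustrative textbook value.

import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration(t_use_c, t_stress_c, activation_energy_ev=0.7):
    """How much faster a thermally driven mechanism ages at t_stress_c vs. t_use_c."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(activation_energy_ev / K_BOLTZMANN_EV * (1 / t_use - 1 / t_stress))

# Electronics sitting at about 65 C (roughly 150 F under the hood) versus a 25 C room:
print(f"~{arrhenius_acceleration(25, 65):.0f}x faster thermal aging")
```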

The point of this particular blog is to raise awareness of the fact that the reliability of electronics needs to increase.  There is work being done that will improve the situation.  But what do we do to build a device that can last and perform its intended function for 100 years, or even 1,000 years?

References:

  1. https://semiengineering.com/circuit-aging-becoming-a-critical-consideration/


Nano-Safety

It’s been a while since nanotechnology safety was featured in this blog.  This blog has previously mentioned the lawsuit against Johnson & Johnson (J&J) regarding the talc in their talcum powder.  The contention was that asbestos in the talc employed by J&J caused the women’s illnesses.  There is one identified source that definitely had asbestos in its material.  The first award in that case was made despite J&J’s claim that it had not used talc from that source and the fact that the woman could not remember whether she had ever used the J&J product.  Recently, Bayer AG lost a case with a $2B award to two claimants who developed cancer attributed to Bayer’s glyphosate-based Roundup weed killer [Ref. 1].  This award was made despite the fact that the U.S. Environmental Protection Agency has concluded that there are no risks to public health from the current registered uses of glyphosate.

So what does this have to do with Nano-Safety?  There are so many unknowns with nanomaterials that the long-term effects are not fully comprehended.  As diagnostic science improves, more potential issues will be found.  Add increasing litigation and the vast unknowns to the number of reports where “scientific mischief” occurs (a phenomenon covered in previous blogs), and the only practical method to mitigate negative consequences of working with nanomaterials is to implement a comprehensive program of nanotechnology safety.

Given that a full understanding of Nano-Safety requires complete knowledge of material properties, particle behavior, toxicity, impact on the environment, proper handling and protection, etc., and that the ability to meet all these conditions does not exist, it becomes necessary to develop an approach that ensures proper procedures are established and maintained, worker safety is provided and routinely checked, equipment is appropriate for the task and monitored, etc.  This requires a planned approach.  The initial White Paper on Nano-Safety [Ref. 2] characterized four areas to be addressed: 1) Material Properties; 2) Impact on People and the Environment; 3) Handling of Nanomaterials; and 4) Business Focus.  A synopsis of the key thoughts in the White Paper follows in the next few paragraphs.

Understanding Material Properties requires knowing which properties need to be investigated.  That is a challenge, since the properties are unknown.  A recent report indicated that a twisted layer of graphene demonstrated some magnetic-storage potential.  So where does one start?  The need is to begin with the obvious challenges, like size or shape, that are similar to known issues, and then continually explore new research to obtain knowledge of new properties.

The key objective is evaluating the Impact on People and the Environment.  This is non-trivial and will not be accurately understood in the near future.  That is why the next area becomes critical.  Evaluating workers before, during, and after their involvement in nanotechnology efforts provides long-term learning about potential effects.

Proper Handling of Nanomaterials is the only means of working with materials whose properties are unknown.  Procedures need to be established and followed.  Storage, usage, and disposal of nanomaterials need to be monitored as if everything unknown is potentially hazardous.

The Business Focus is important.  The organization needs to protect its personnel.  Typically, this is done by instituting established guidelines.  While some guidelines exist, in general they cannot cover all unknowns.  This requires continual monitoring and record keeping.

Even doing everything possible does not guarantee that everyone and everything will be safe.  It does provide a basis for protecting people and the environment.  With continual updates to procedures, this process will minimize potential risks.  An Attitude of Nano-Safety is required.

References:

  1. https://www.reuters.com/article/us-bayer-glyphosate-lawsuit/california-jury-hits-bayer-with-2-billion-award-in-roundup-cancer-trial-idUSKCN1SJ29F
  2. http://www.tryb.org/a_white_paper_on_nano-safety.pdf


A coming disruption in technology? Part 2

The design of the semiconductor circuit components has been evolving as the feature size shrinks.  The semiconductor industry has been able to develop new designs as older techniques become restrictive due to various material properties.  This blog is a short review of the history of development of device structures with some thoughts on the coming changes.  There were many different types of transistors during the 1950s, 1960s, and into the 1970s.  These can be found in many places on-line, including Wikipedia [Ref. 1].

The field effect transistor (FET) has been a staple of semiconductor manufacturing for years.  The majority of the devices produced through the 2000s were complementary metal-oxide-semiconductor (CMOS) devices.  The figure in reference 2 shows the components of the FET.

FETs are manufactured directly on silicon wafers.  The gate electrode controls the flow of charge, either electrons or holes.  The device is built as planar layers, which gives rise to the term “planar gate.”  As semiconductor components scaled down, the gate dimensions became smaller.  Shrinking the gate length increased the switching speed, resulting in faster electronics.  Problems began to arise as the gate length shrank toward 20nm: current could flow without any control by the gate electrode.

The issue of not having enough surface area to control the FET was addressed with the development of the FinFET (see the figure in reference 3).  The gate material is vertical, providing more contact area for controlling the flow of charge.  The “source” and “drain” are also vertical fins, which provide better control and operate at a lower voltage, which in turn provides better performance.  Unfortunately, the design is more complex and challenging to produce.  Semiconductor manufacturing has kept up with these challenges, but…


With the current view of future production, FinFETs are starting to reach their limit, so a different solution needs to be developed.  One line of thought is to increase the number of “fins” to cover more contact area; this leads to the Gate-All-Around FET (GAAFET) [Ref. 4].  Others are looking at different designs.  The Multi-Bridge-Channel FET (MBCFET) was identified in the early 2000s, but the manufacturing technology was not available.

Coming soon?

The development of the GAAFET was led by IBM.  Samsung has moved beyond that collaboration to the MBCFET, which it contends will be in production for the 3nm node [Ref. 5].

There are challenges to gate-all-around designs.  There is only modest improvement over FinFETs at 5nm.  The manufacturing technology is challenging [Ref. 6], with tighter requirements due to greater complexity.  The current version of the devices requires a silicon, silicon-germanium, and silicon superlattice stack.  More details of the process are available in reference 6.  The projection is for introduction of the GAAFET at 3nm.

Historically, each succeeding node brought with it more transistors per unit area with lower power and higher performance at a lower cost per transistor.  Due to the greater challenges in producing smaller and smaller transistors, that is no longer true.  The question is not whether the technology will continue to advance, but how it will transform to continue providing more capable devices.

References:

  1. https://en.wikipedia.org/wiki/Transistor
  2. Larry Larson et al., “Nanotechnology in Electronics,” Chapter 2 in Nanotechnology for Sustainable Manufacturing, David Rickerby, ed., CRC Press, 2016, pp. 17-35.  ISBN-13: 978-1-4822-1483-3
  3. https://blog.lamresearch.com/tech-brief-finfet-fundamentals/
  4. https://www.pcgamesn.com/samsung/3nm-gaa-beat-tsmc
  5. https://wccftech.com/samsung-announces-3nm-mbcfet-process-5nm-production-in-2020/
  6. https://semiengineering.com/5nm-vs-3nm/


A coming disruption in technology? Part 1

Many years have passed since the 1998 announcement of carbon nanotube transistors.  The hope has been for the semiconductor industry to create a means of producing a carbon nanotube computer.  First, why is there a need to replace the current semiconductor manufacturing process?  The ability to reduce the minimum feature size by 30% has been occurring on a regular basis of 18 to 24 months for decades.  A 30% reduction results in the dimension shrinking to 70% of the previous size.  With both length and width shrinking to 70%, the area required is 49% of the previous area.  Consequently, twice as many features can be fabricated in the same area as previously possible.
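
The arithmetic is simple enough to check directly; the snippet below is just that calculation, nothing more.

```python
# Quick check of the scaling arithmetic described above: shrinking linear
# dimensions by 30% leaves about 49% of the area, roughly doubling density.

shrink = 0.70                    # new dimension as a fraction of the old one
area_fraction = shrink * shrink  # both length and width shrink
density_gain = 1 / area_fraction

print(f"area is {area_fraction:.0%} of the previous node")        # 49%
print(f"feature density increases roughly {density_gain:.1f}x")   # ~2.0x
```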

The imaging process (lithography) is limited by the minimum feature size the imaging source can produce.  Lithography tools produce images with much greater precision than the best photographic lenses available today.  In addition to the high precision of the lithography equipment, a number of optical tweaks can further refine the image quality.  Still, there is a physical limit on how far engineering can push imaging.  Optical lithography went from visible wavelengths to ultraviolet wavelengths (shorter wavelengths) and then to even shorter wavelengths.  The latest lithography tools employ a wavelength of 13.5 nanometers (nm) and are being used to produce images with one dimension under 10nm.  (If interested in further reading on the latest processes, please see reference 1.)

There are challenges as the dimensions shrink.  More process steps require higher-precision equipment and are more costly.  Additional manufacturing steps increase the time required to manufacture the semiconductors, and they also provide opportunities for decreasing the yields of the final devices.  All of this raises cost.

As dimensions shrink, there is potential for unwanted electrical characteristics to appear, which degrades the performance of the devices.  As the size shrinks and more electronic circuitry is incorporated, the total distance the signals must travel in a device increases.  Some estimates suggest that 50% of the power in leading-edge devices is used to move the electrical signals through the circuitry.

Why does the industry continue on as it has in the past?  Currently, there is nothing else that can produce billions of connected transistors every second.

According to a C&EN article [Ref. 2], DARPA is interested in carbon nanotube circuits.  There are a number of methods to apply the nanotubes in circuitry that will provide better performance with lower power consumption.  One of the challenges mentioned in the article is the caution needed in handling some of the exotic materials within the clean space required for semiconductor manufacturing.  Many materials can have very detrimental impacts on the production process.

Back in the 1990s, the Advanced Technology Development Facility of SEMATECH instituted a process whereby it could handle as many as 40 different exotic and potentially process-disastrous materials annually without danger of contamination.

The process for producing carbon nanotubes requires a seed for growth of the tubes.  Typically, the seed material is one that should not be permitted in the clean environment of semiconductor manufacturing.  However, it can be handled correctly, with the safety controls required to manufacture the semiconductors.

Part 2 will cover the possible types of transistor designs under consideration and some of the issues in implementing them into existing manufacturing.

References:

  1. https://spectrum.ieee.org/semiconductors/nanotechnology/euv-lithography-finally-ready-for-chip-manufacturing
  2. https://cen.acs.org/materials/electronic-materials/Carbon-nanotube-computers-face-makebreak/97/i8


Nanotechnology is spreading its wings

Nanotechnology has been around for a while.  There have been lots of promises, resulting in a few solid applications, a number of promising medical advances, and a number of possibilities still in the research stage.  What is often overlooked is the advances made in the science needed to evaluate the new materials.  Microscopy has advanced to the point where pictures of bonding between atoms can be recorded.  Modeling capabilities have improved to the point where some material properties can be predicted.  There is the ability to layer 2-D sheets of materials to create desired properties.  It is possible to determine substitute materials for applications that are required not to employ certain chemicals or materials.  As researchers learn more about material properties, they are able to develop new materials to solve existing problems, such as those arising from the hardness or toxicity of currently required materials.  Below are two examples of work being done that demonstrate this.

Research at IIT Bombay [Ref. 1] has focused on changing the material in piezoelectric nanogenerators (PENGs) in order to incorporate energy-harvesting devices into the human body.  The drawback has been the need to use highly toxic ceramics, of which lead zirconate titanate (PZT) is a primary example.  The researchers worked with a hybrid perovskite that should not have had the ability to produce spontaneous polarization.  (Perovskite refers to any material that has the same type of crystal structure as calcium titanium oxide.  Perovskites have had a major impact on the solar industry in the last few years.)  This work demonstrated a far greater piezoelectric response than the best non-toxic material, BaTiO3.

After the original results, the researchers considered enhancing the material by incorporating it into a ferroelectric polymer.  The results were surprising: the power doubled compared to the original material.  They have achieved one volt for a crystal lattice contraction of 73 picometers.  There is much work yet to be done, but there is a long list of possible medical applications for this material.

Work performed at Xi’an Jiaotong University is based on a soft dielectric material that creates a voltage when bent [Ref. 2].  When certain materials are non-uniformly deformed, the strain gradient creates a separation of positive and negative ions, which develops a voltage across the material.  This effect can be observed in many different dielectric materials.  Unfortunately, it is strongest in brittle ceramic materials, and that brittleness makes them unsuitable for applications like stretchable electronics.  The researchers developed a process to add a layer of permanent negative charges within certain soft materials.  At rest, with no stress, there is no voltage between the top and bottom of the material.  If a bar of the material is clamped on both ends and the middle of the bar is deformed, high voltages can be developed; the researchers reported measuring -5,723 volts.
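
For context, the bending-induced voltage described above is the flexoelectric effect, which in its simplest one-dimensional textbook form (not a formula taken from the cited paper) relates the induced polarization to the strain gradient:

```latex
P = \mu \, \frac{\partial \varepsilon}{\partial x}
```

Here P is the induced polarization, ε the strain, x the position along the material, and μ the flexoelectric coefficient.  A uniform strain has no gradient and produces no polarization, which is why bending, rather than simple stretching, generates the voltage.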

The idea is that, with new tools and the resulting knowledge obtained, it is possible to address applications that currently require materials unsuitable for the application’s environment.  In many cases, that environment is inside the human body.

References:

  1. https://physicsworld.com/a/perovskites-perform-well-under-pressure/?utm_medium=email&utm_source=iop&utm_term=&utm_campaign=14259-41937&utm_content=Title%3A%20Perovskites%20perform%20well%20under%20pressure%20-%20research_updates
  2. https://physicsworld.com/a/new-flexoelectret-material-creates-thousands-of-volts-when-bent/?utm_medium=email&utm_source=iop&utm_term=&utm_campaign=14259-41937&utm_content=Title%3A%20New%20%E2%80%98flexoelectret%E2%80%99%20material%20creates%20thousands%20of%20volts%20when%20bent%20-%20research_updates


Move over “Nano”, “Pico” is coming.

Nothing remains on top (or at the bottom) forever.  “Pico” is three orders of magnitude smaller than “nano.”  A rule of thumb is that to manufacture or produce something, one must be able to measure to at least an order of magnitude smaller, and more likely two orders of magnitude smaller.  Why?  In order to mass-manufacture something, one needs to make it with consistency from one item to another.  If pieces go together, they need a tolerance so that the fit is acceptable.

There is a story that circulated during the early application of robots: the robots could not do what the human assemblers were doing.  It turned out that the parts being assembled had a slight variation in the centering of the pieces.  The human worker could make an adjustment, but the robot could not.  The tolerances were changed, and the robots were “happy” and assembled the parts.  The tolerances had to be made smaller for the automated assembly to function properly.

So what does that have to do with “pico”?  Please bear with me in getting to the point of why “pico” is needed.  As the reader is probably aware, the semiconductor industry has been steadily decreasing the size of the circuitry in semiconductors.  The minimum feature size has been shrinking by a factor of about 0.7 roughly every 18 months.  (A 0.7 reduction in dimension results in an approximately 50% area reduction.)  This observation was first quantified by Gordon Moore of Intel and has become known as Moore’s Law.  The current “next” generation being developed for semiconductors is below 10nm.

Semiconductor manufacturing consists of processing many tens of layers, each containing a different portion of the final circuitry.  The material on each layer is modified by various processes that create the desired features.  The process involves coating a material, exposing the desired pattern, and then removing or adding specific materials.

The features are produced by illuminating a wafer with light that is modified by an imaging mask.  The mask contains the features that will produce the desired images.  The projection exposes a resist to create a pattern that can be etched to remove the unwanted portion of the surface.  Typically, the mask features are a magnification of the final image, and the optical system reduces the image to create the desired feature size.

Roughly, the minimum “achievable” image spot definition produced by a light source is set by the wavelength.  The wave nature of light produces diffraction patterns, which have alternating rings of light and dark.  Lord Rayleigh defined the minimum separation at which it is possible to identify two separate points as the point where the center of one spot falls in the first dark ring of the other spot.  The actual separation also includes a relationship to the diffraction limit of the angular illumination from the lens system.  In this discussion, we will use the wavelength divided by 4 (which would require extremely good optics).  The current semiconductor tools in widespread production employ 193nm wavelength sources.  With that assumption, it would appear that spots of about 50nm could be produced.  But there are other influences that degrade the images.  A key factor is aberrations, or distortions, introduced by the lenses.  Another issue is that the images produced need to have vertical sidewalls, not the slope of two overlapping images.  Add to that the fact that resists do not produce smooth patterns at the nanoscale: the molecules of the resist cause roughness.  The lens system is also limited by the index of refraction of the lens material (for transmissive systems) and the numerical aperture, which is a function of the focus angle of the lens system.  The list of imperfections that degrade the final image goes on.
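
The wavelength/4 estimate used above can be written down directly; the snippet below is only that rule of thumb, not the full resolution formula (which also involves the numerical aperture and process-dependent factors).

```python
# Back-of-the-envelope minimum spot estimate using the wavelength/4 rule of
# thumb adopted in the text.  Real resolution is k1 * wavelength / NA with
# many corrections; this is only the simplified estimate.

def rough_min_feature(wavelength_nm, divisor=4):
    return wavelength_nm / divisor

for name, wavelength in [("193nm (deep UV)", 193.0), ("13.5nm (EUV)", 13.5)]:
    print(f"{name}: roughly {rough_min_feature(wavelength):.1f} nm")
# 193nm: roughly 48 nm (the ~50nm spot mentioned above); 13.5nm: roughly 3.4 nm
```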

How do we produce features as small as those being made today?  There are many factors.  The application of immersion techniques to the 193nm systems has further reduced the achievable feature size.  The development of mathematical techniques that evaluate the interference of light makes it possible to use features on the mask to prevent portions of adjacent features from being imaged.  (For a more in-depth understanding of these techniques, publications by Chris Mack are recommended; see reference 1.)  These imaging systems, known as lithography systems, are very large and quite expensive.  Many engineering innovations have become part of the existing lithography systems.

The optics are the key to successful imaging.  Surface variations in the lens cause defects in the image on the wafer surface.  The deviation of a lens surface from its theoretical design is measured, and the resulting number is called the root-mean-square (RMS) error.  RMS is basically an estimate of the deviation from the perfect design.  The current state of the art for high-resolution camera lenses is roughly 200nm RMS.
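
For readers unfamiliar with the figure of merit, the sketch below shows what an RMS surface error actually measures; the sample height values are invented purely to demonstrate the calculation.

```python
# Minimal sketch of an RMS surface-error calculation: the root-mean-square
# deviation of measured surface heights from the ideal design surface.
# The numbers are invented for demonstration.

import math

def rms_error(measured, ideal):
    deviations = [m - i for m, i in zip(measured, ideal)]
    return math.sqrt(sum(d * d for d in deviations) / len(deviations))

ideal_surface = [0.0, 0.0, 0.0, 0.0, 0.0]        # perfect design heights, nm
measured_surface = [0.3, -0.2, 0.1, -0.4, 0.2]   # hypothetical measured heights, nm

print(f"surface figure error: {rms_error(measured_surface, ideal_surface):.2f} nm RMS")
```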

As presented in the keynote session at the Advanced Lithography Conference, the current RMS for the latest lithography systems in production is less than 1nm.  In that presentation, it was stated that the coming image sizes require the lithography systems’ RMS to be 50pm.  That is picometers; one picometer is one thousandth of a nanometer.  [Ref. 2]  The talk referenced a slide from Winfried Kaiser showing that the equivalent of 50pm corresponds to holding variations across the length of Germany, some 850 km, to less than 100 micrometers.

As feature sizes continue to shrink, the variations that matter will be in the picometer range.  The values will probably start out being written as a decimal fraction of a nanometer, just as nanometer dimensions were first written as micrometers with a decimal point in front of the number.  “Pico” is coming to nanotechnology.

References:

  1. Chris Mack has various explanations of optical techniques available at http://www.lithoguru.com
  2. Bernd Geh, EUVL – the natural evolution of optical microlithography, Advanced Lithography 2019, Conference 10957, Extreme Ultraviolet Lithography, Keynote Session, February 2019, San Jose, California. 


The Challenges of Reproducible Research

Twice last year, in April and November, this blog addressed fake science.  Judging by recent news articles, that challenge has not gone away.  But if one assumes the research is accurately reported and independent research produces differing results, the question is: “How can that be explained?”

One of the first considerations needs to be the equipment employed.  Ideally, the research report will include the type (make and model number) of equipment employed in the experiments.  Equipment model numbers provide a means of determining the capability of the identified equipment to deliver both accuracy and precision.  One could assume that this should be sufficient to get good correlation between experimenters.

One example of experimental correlation that I have close knowledge of involved researchers in Houston, Texas and Boston, Massachusetts.  The experiments involved evaluating the properties of graphene.  A very detailed test was conducted in the Houston lab.  The samples were carefully packaged and shipped to Boston.  There was no correlation between the two researchers’ results on the properties of the graphene.  The material was returned and reevaluated, and the results were different from the original tests.  Eventually, it was shown that the material had picked up contamination from the air during transit.  This led to a new round of testing and evaluation.

The surprise was that the results from testing by the different researchers were close but not identical.  So another search was started.  Equipment model numbers were checked, and then serial numbers were checked and sent to the equipment supplier.  It turned out that during the production run, which lasted a number of years, a slight improvement had been made in the way the equipment evaluated the samples.  This was enough to change the resulting measurements at the nanoscale.  Problem solved.

This example points to the need not only for calibration but also for working with other researchers to resolve discrepancies in initial research results.  Reports need to include the test procedures and equipment employed and, ideally, some information on calibration.

In many cases, we take calibration for granted.  How many people assume that their scales at home are accurate?  How many check oven temperatures?  The list goes on.  I have been in that group.  We replaced our old range/oven this past December.  The oven appeared to be working fine until Christmas cookies were baked; they came out a bit overdone.  The manufacturer indicates in the instructions that it may take time to make slight modifications in how you bake.  I decided to check the actual temperatures.  I found that a setting of 350°F resulted in an oven temperature of 350°F.  However, a setting of 400°F resulted in an oven temperature of 425°F, which is enough of a difference to overbake the cookies.  Almost weekly visits over two months by the manufacturer’s service technician have resulted in replacement of every replaceable part and a number of recalibrations with no apparent change in performance.  After the first few weeks, I decided to do a more thorough test of the oven.  (My measurements agree with the technician’s better equipment to within a couple of degrees.)  What I found is that at 350°F the oven is solid.  At 360°F, the oven temperature is almost 395°F.  At higher temperatures, the difference between the oven setting and the actual reading decreases to 25 degrees.  Between 450°F and 500°F the temperature difference decreases from plus 25°F to minus 5°F.  The point of this example is that calibration accuracy, even for common appliances, is not something to be taken for granted.  Calibration must be done, and calibration of scientific equipment must be done regularly to ensure accuracy of results.
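
For clarity, the readings described above can be laid out as setpoint versus measured temperature.  The 450°F and 500°F entries are approximate values inferred from the plus-25 to minus-5 degree range stated in the text.

```python
# Oven setpoint vs. measured temperature (deg F) as described above, with the
# resulting calibration error.  The 450 and 500 entries are approximate.

readings = {
    350: 350,  # solid at 350
    360: 395,  # almost 395 at a 360 setting
    400: 425,  # 425 at a 400 setting
    450: 475,  # roughly +25 near 450 (approximate)
    500: 495,  # roughly -5 at 500 (approximate)
}

for setpoint, actual in readings.items():
    print(f"set {setpoint} F -> measured {actual} F (error {actual - setpoint:+d} F)")
```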

Disagreement among researchers’ test results does not necessarily indicate errors in the experiments; the equipment could be out of calibration.  It is critical to verify the calibration of the equipment employed in research before claiming faulty research.
