A coming disruption in technology? Part 1

Many years have passed since the 1998 announcement of carbon nanotube transistors.  The hope has been for the semiconductor industry to develop a means of producing a carbon nanotube computer.  First, why is there a need to replace the current semiconductor manufacturing process?  For decades, the minimum feature size has been reduced by 30% on a regular cadence of 18 to 24 months.  A 30% reduction shrinks the dimension to 70% of its previous size.  With both length and width shrinking to 70%, the area required is 49% of the previous area.  Consequently, twice as many features can be fabricated in the same area as previously possible.
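
A quick check of that scaling arithmetic, as a minimal sketch in Python (the 0.7 shrink factor is the figure quoted above):

```python
# A 30% linear shrink leaves 70% of each dimension, so the area of a feature
# falls to 0.7 * 0.7 = 0.49 of its previous value, roughly doubling the number
# of features that fit in the same area.
shrink = 0.70                    # linear shrink factor per generation
area_factor = shrink ** 2        # area per feature relative to the prior generation
density_gain = 1 / area_factor   # features per unit area relative to the prior generation

print(f"area per feature: {area_factor:.2f}x")   # ~0.49x
print(f"feature density:  {density_gain:.2f}x")  # ~2.04x
```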

The imaging process (lithography) is limited by the minimum feature size that the illumination source can resolve.  Lithography tools produce images with far greater precision than the best photographic lenses available today.  In addition to the high precision of the lithography equipment, there are a number of optical tweaks that can further refine the image quality.  Even so, there is a physical limit on how far engineering can push imaging.  Optical lithography went from visible wavelengths to ultraviolet (shorter) wavelengths and then to even shorter wavelengths.  The latest lithography tools employ a wavelength of 13.5 nanometers (nm), which is being used to produce images with one dimension under 10nm.  (If interested in further reading on the latest processes, please see Reference 1.)

There are challenges as the dimensions shrink.  Many process steps require higher-precision, more costly equipment.  Additional manufacturing steps increase the time required to manufacture the semiconductors, and each added step is another opportunity for the yield of the final devices to drop.  All of this raises cost.

As dimensions shrink, there is the potential for unwanted electrical characteristics to appear, which degrades the performance of the devices.  As the size shrinks and more electronic circuitry is incorporated, the total distance the signals must travel in a device increases.  Some estimates put the share of power used simply to move electrical signals through the circuitry of leading-edge devices at 50%.

Why does the industry continue on as it has in the past?  Currently, there is nothing else that can produce billions of connected transistors every second.

According to a C&EN article (Ref. 2), DARPA is interested in carbon nanotube circuits.  There are a number of methods for incorporating the nanotubes into circuitry that would provide better performance with lower power consumption.  One of the challenges mentioned in the article is the caution needed in handling some of the exotic materials within the clean space required for semiconductor manufacturing.  Many materials can have very detrimental impacts on the production process.

Back in the 1990s, the Advanced Technology Development Facility of SEMATECH instituted a process that allowed it to handle as many as 40 different exotic, potentially process-disastrous materials a year without danger of contamination.

The process for producing carbon nanotubes requires a seed for growth of the tubes.  Typically, the seed material is one that should not be permitted in the clean environment of semiconductor manufacturing.  Even so, it can be handled correctly, with the safety controls required to manufacture the semiconductors.

Part 2 will cover the possible types of transistor designs under consideration and some of the issues in implementing them into existing manufacturing.

References:

  1. https://spectrum.ieee.org/semiconductors/nanotechnology/euv-lithography-finally-ready-for-chip-manufacturing
  2. https://cen.acs.org/materials/electronic-materials/Carbon-nanotube-computers-face-makebreak/97/i8


Nanotechnology is spreading its wings

Nanotechnology has been around for a while.  There have been lots of promises, resulting in a few solid applications, a number of promising medical advances, and a number of possibilities that are still in the research stage.  What is often overlooked are the advances made in the science needed to evaluate the new materials.  Microscopy has advanced to the point where pictures of bonding between atoms can be recorded.  Modeling capabilities have improved to the point where some material properties can be predicted.  There is the ability to layer 2-D sheets of materials to create desired properties.  It is possible to determine substitute materials for applications that must avoid certain chemicals or materials.  As researchers learn more about material properties, they are able to develop new materials to solve existing problems caused by the hardness or toxicity of the materials currently required.  Below are two examples of work being done that demonstrate this.

Research at IIT Bombay [Ref. 1] has been focused on changing the material in piezoelectric nanogenerators (PENGs) in order to incorporate energy-harvesting devices into the human body.  The drawback has been the need to use highly toxic ceramics, of which lead zirconate titanate (PZT) is a primary example.  The researchers worked with a hybrid perovskite that should not have had the ability to produce spontaneous polarization.  (Perovskite refers to any material that has the same type of crystal structure as calcium titanium oxide.  Perovskites have had a major impact on the solar industry in the last few years.)  This work demonstrated a far greater piezoelectric response than the best non-toxic material, BaTiO3.

After the original results, the researchers considered enhancements, incorporating the material into a ferroelectric polymer.  The results were surprising in that the power doubled over the original material.  They achieved an output of one volt for a crystal lattice contraction of 73 picometers.  There is much work yet to be done, but there is a long list of possible medical applications for this material.

Work performed at Xi’an Jiaotong University is based on a soft dielectric material that creates a voltage when bent [Ref. 2].  When certain materials are non-uniformly deformed, the strain gradient creates a separation of positive and negative ions, which develops a voltage across the material.  This action can be observed in many different dielectric materials.  Unfortunately, the effect is strongest in brittle ceramic materials, and that brittleness makes them unsuitable for applications like stretchable electronics.  The researchers have developed a process to add a layer of permanent negative charges within certain materials.  At rest, with no stress, the material has no voltage between its top and bottom surfaces.  If a bar of the material is clamped on both ends and the middle of the bar is subjected to a deforming force, high voltages can be developed.  The researchers reported measuring -5,723 volts.

The idea is that, with new tools and the knowledge they provide, it is possible to revisit needed applications that currently depend on materials unsuited to the environment in which they must operate.  In many cases that environment is inside the human body.

References:

  1. https://physicsworld.com/a/perovskites-perform-well-under-pressure/?utm_medium=email&utm_source=iop&utm_term=&utm_campaign=14259-41937&utm_content=Title%3A%20Perovskites%20perform%20well%20under%20pressure%20-%20research_updates
  2. https://physicsworld.com/a/new-flexoelectret-material-creates-thousands-of-volts-when-bent/?utm_medium=email&utm_source=iop&utm_term=&utm_campaign=14259-41937&utm_content=Title%3A%20New%20%E2%80%98flexoelectret%E2%80%99%20material%20creates%20thousands%20of%20volts%20when%20bent%20-%20research_updates


Move over “Nano”, “Pico” is coming.

Nothing remains on top (or at the bottom) forever.  “Pico” is three orders of magnitude smaller than “nano”.  A rule of thumb is that to be able to manufacture or produce something, one must be able to measure at least one order of magnitude smaller, and more likely two orders of magnitude smaller.  Why?  In order to mass manufacture something, one needs to be able to make it with consistency from one item to another.  If pieces go together, they need to have a tolerance so that the fit is acceptable.

There is a story that circulated during the early application of robots that the robots could not do what the human assemblers were doing.  It turned out that the parts being assembled had a slight variation in the centering of the pieces.  A human worker could make an adjustment, but the robot could not.  The tolerances were tightened, and the robots were “happy” and assembled the parts.  The tolerances had to be made smaller for the automated assembly to function properly.

So what does that have to do with “pico”?  Please bear with me on getting to the point of why “pico” is needed.  As the reader is probably aware, the semiconductor industry has been steadily decreasing the size of the circuitry in its semiconductors.  The minimum feature size dimension has been shrinking by a factor of 0.7 roughly every 18 months.  (A 0.7 reduction in dimension results in an approximately 50% area reduction.)  This observation was first quantified by Gordon Moore of Intel and has become known as Moore’s Law.  The current “next” generation being developed for semiconductors is below 10nm.
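
To put that cadence in perspective, here is a rough sketch of the generation count implied by the 0.7 factor (the 90nm starting node is an assumption chosen only for illustration):

```python
# How many 0.7x shrink generations does it take to go from an assumed 90nm
# feature size down to below 10nm, at roughly 18 months per generation?
feature_nm = 90.0        # assumed starting node, for illustration only
shrink = 0.7             # linear shrink per generation
months_per_gen = 18      # roughly 18 months per generation

generations = 0
while feature_nm >= 10.0:
    feature_nm *= shrink
    generations += 1

print(f"{generations} generations (~{generations * months_per_gen / 12:.1f} years), "
      f"ending near {feature_nm:.1f}nm")
# 7 generations (~10.5 years), ending near 7.4nm
```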

Semiconductor manufacturing consists of processing many tens of layers, each containing different portions of the final circuitry.  The material on each of the layers is modified by various processes that create the desired features.  The process involves coating a material, exposing the desired pattern, and then removing or adding specific materials.

The features for the semiconductor are produced by illuminating a wafer with light that is modified by an imaging mask.  The mask contains the features that will produce the desired images.  The projection illuminates a resist to create a pattern that can be etched to remove the unwanted portion of the surface.  Typically, the mask has features at a magnification of the final image.  The optical system reduces the image to create the desired feature size. 

Roughly, the minimum “achievable” image spot definition produced by a light source is defined by the wavelength.  The wave nature of light produces diffraction patterns, which have alternating rings of light and dark.  Lord Rayleigh defined the minimum separation at which it is possible to identify separate points as the point where the center of one spot falls on the first dark ring of the other spot.  The actual separation also depends on the diffraction limit of the angular illumination from the lens system.  In this discussion, we will use the wavelength divided by 4.  (This would require extremely good optics.)  The current semiconductor tools in widespread production employ 193nm wavelength sources.  With that assumption, it would appear that spots of about 50nm could be produced.  But there are other influences that degrade the images.  A key factor is the aberrations, or distortions, introduced by the lenses.  Another issue is that the images produced need to have vertical sidewalls, not the slope of two overlapping images.  Add to that the fact that resists do not produce smooth patterns at the nanoscale; the molecules of the resist cause roughness.  Then the actual lens system is also limited by the index of refraction of the lens material (for transmissive systems) and by the numerical aperture, which is a function of the focus angle of the lens system.  The list of imperfections that degrade the final image goes on.
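
A rough numerical version of that estimate, as a sketch (the λ/4 simplification is the one used above; a fuller Rayleigh-style form is k1·λ/NA, so λ/4 corresponds to very aggressive k1 and NA values):

```python
# Approximate minimum resolvable spot using the simplified lambda/4 rule from
# the text, applied to the two source wavelengths mentioned in these posts.
def min_spot_nm(wavelength_nm: float) -> float:
    return wavelength_nm / 4.0

for name, wavelength in [("ArF (193nm)", 193.0), ("EUV (13.5nm)", 13.5)]:
    print(f"{name}: ~{min_spot_nm(wavelength):.1f}nm minimum spot")
# ArF (193nm): ~48.2nm    EUV (13.5nm): ~3.4nm
```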

How do we produce such small features as are being made today?  There are many factors.  The application of immersion techniques to the 193nm systems has further reduced the feature size possible.  The development and application of mathematical techniques that evaluate the interference of light makes it possible to use features on the mask to prevent portions of adjacent features from being imaged.  (For a more in-depth understanding of these techniques, publications by Chris Mack are recommended.  See Reference 1.)  These imaging systems, known as lithography systems, are very large and quite expensive.  Many engineering innovations have become part of the existing lithography systems.

The optics are the key to successful imaging.  Surface variations in the lens cause defects in the image on the wafer surface.  The deviation of a lens surface from the theoretical design is measured, and the resulting number is called the root mean square (RMS) error.  RMS is basically an estimate of the deviation from the perfect design.  The current state of the art for high-resolution camera lenses is roughly 200nm RMS.
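
For readers unfamiliar with the metric, here is a minimal sketch of how an RMS surface error is computed from measured deviations (the sample numbers are invented purely for illustration):

```python
import math

# Hypothetical surface-height deviations from the design figure, in nanometers,
# measured at a handful of points across the lens.
deviations_nm = [12.0, -8.5, 3.2, -15.1, 7.8, -2.4]

# RMS error: square each deviation, average the squares, then take the square root.
rms_nm = math.sqrt(sum(d * d for d in deviations_nm) / len(deviations_nm))
print(f"RMS surface error: {rms_nm:.1f}nm")
```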

As presented in the keynote session at the Advanced Lithography Conference, the current RMS for the latest lithography systems in production is less than 1nm.  It was also stated that the coming image sizes will require the lithography systems’ RMS to be 50pm.  That is picometers.  One picometer is one thousandth of a nanometer.  [Ref. 2]  The talk referenced a slide from Winfried Kaiser showing that 50pm is equivalent to requiring the variations across the length of Germany, roughly 850 km, to be less than 100 micrometers.
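
The proportion behind that comparison can be checked directly; the sketch below infers the optic size the analogy implies (the roughly 0.4 m figure is an inference from the two numbers quoted, not a value from the talk):

```python
# Scale check on the Kaiser comparison: 100 micrometers of variation across
# Germany's ~850 km corresponds to a 50pm tolerance on an optic of what size?
germany_m = 850e3        # length of Germany, meters
variation_m = 100e-6     # allowed variation at that scale, meters
tolerance_m = 50e-12     # 50 picometer RMS requirement, meters

relative = variation_m / germany_m          # ~1.2e-10 relative tolerance
implied_optic_m = tolerance_m / relative    # optic size implied by the analogy

print(f"relative tolerance: {relative:.2e}")
print(f"implied optic size: ~{implied_optic_m:.2f} m")   # roughly 0.4 m
```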

As the feature sizes continue to shrink, the variations in properties will be in the picometer range.  It will probably start out being written as a decimal fraction of a nanometer, just as nanometer dimensions were first written as decimal fractions of a micrometer.  “Pico” is coming to nanotechnology.

References:

  1. Chris Mack has various explanations of optical techniques available at http://www.lithoguru.com
  2. Bernd Geh, EUVL – the natural evolution of optical microlithography, Advanced Lithography 2019, Conference 10957, Extreme Ultraviolet Lithography, Keynote Session, February 2019, San Jose, California. 


The Challenges of Reproducible Research

Twice last year, in April and November, this blog addressed fake science.  Based on recent news articles, that challenge has not gone away.  But if one assumes the research is accurately reported and yet independent researchers get differing results, the question is: “How can that be explained?”

One of the first considerations needs to be the equipment employed.  Ideally, the research report will include the type (make and model number) of the equipment employed in the experiments.  The model numbers provide a means of determining the capability of the identified equipment in terms of both the accuracy and the precision of its results.  One could assume that this should be sufficient to get good correlation between experimenters.

One example of experimental correlation that I have close knowledge about was between researchers in Houston, Texas, and Boston, Massachusetts.  The experiments involved evaluating properties of graphene.  A very detailed test was conducted in the Houston lab.  The samples were carefully packaged and shipped to Boston.  There was no correlation between the two researchers’ results on the properties of the graphene.  The material was returned and reevaluated.  The results were different from the original tests.  Eventually, it was shown that the material had picked up contamination from the air during transit.  This led to a new round of testing and evaluations.

The surprise was that the results from testing by the different researchers were close but not identical.  So, another search was started.  Equipment model numbers were checked, and then serial numbers were checked and sent to the equipment supplier.  It turned out that during the production run, which lasted a number of years, a slight improvement was made in the way the equipment evaluated the samples.  This was enough to change the resulting measurements at the nanoscale.  Problem solved.

This example points to the need not only for calibration but also for working with other researchers to resolve discrepancies in initial research results.  Reports need to include the test procedures and equipment employed.  Ideally, they should include some information on calibration.

In many cases, we take calibration for granted.  How many people assume that their scales at home are accurate?  How many check oven temperatures?  The list goes on.  I have been in that group.  We replaced our old range/oven this past December.  The oven appeared to be working fine until Christmas cookies were baked.  They came out a bit overdone.  The manufacturer indicates in the instructions that it may take time to make slight modifications in how you bake.  I decided to check the actual temperatures.  I found that a setting of 350°F resulted in an oven temperature of 350°F.  However, a setting of 400°F resulted in an oven temperature of 425°F, which is enough difference to overbake the cookies.  Almost weekly visits over two months by the manufacturer’s service technician have resulted in the replacement of all replaceable parts and a number of recalibrations, with no apparent change in performance.  After the first few weeks, I decided to do a more thorough test of the oven.  (My measurements agree with the technician’s better equipment to within a couple of degrees.)  What I found is that at 350°F the oven is solid.  At 360°F, the oven temperature is almost 395°F.  At higher temperatures, the difference between the oven setting and the actual reading decreases to 25 degrees.  Between 450°F and 500°F the difference decreases from plus 25°F to minus 5°F.  The point of this example is that calibration accuracy, even for common appliances, is not something to be taken for granted.  Calibration must be done, and calibration of scientific equipment must be done regularly to ensure accuracy of results.
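
Summarizing those readings as data (a small sketch; the 450°F and 500°F values are inferred from the plus 25°F and minus 5°F differences stated above):

```python
# Oven setting vs. measured temperature (°F), from the readings described above.
# The 450°F and 500°F entries are inferred from the stated +25°F / -5°F offsets.
readings_f = {350: 350, 360: 395, 400: 425, 450: 475, 500: 495}

for setting, measured in readings_f.items():
    offset = measured - setting
    print(f"set {setting}°F -> measured {measured}°F (offset {offset:+d}°F)")
```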

Disagreement among researchers’ test results does not necessarily indicate errors in the experiments; the equipment could simply be out of calibration.  It is critical to verify the calibration of the equipment employed in research before claiming faulty research.


Two-dimensional materials’ research opening new opportunities

There have been some interesting reports in the last year that indicate a growth in interest and a branching out in nanomaterials research.  In the pursuit of better batteries, researchers are investigating many different ideas.  One of them is from a collaboration between University College London and the University of Chicago.  They reported on work toward a magnesium battery. [Ref. 1]  This effort to produce batteries with materials other than lithium is based on the concern that lithium technology is reaching the limit of its capabilities, along with the potential for fires caused by short circuiting.  Magnesium batteries are currently limited by the lack of inorganic materials that will work.  The researchers employed different techniques to produce ordered 7nm magnesium particles.  What they found is that a disordered arrangement of 5nm particles is able to recharge more quickly.  It is thought that the disordered arrangement is what permits the recharging of the battery, which the arrangement of ordered crystals does not permit.  This opens up a new area of investigation for many types of materials.

Work at the New York University Tandon School of Engineering and the NYU Center for Neural Science on electrochemical sensors is based on graphene. [Ref. 2]  Their work on developing a graphene sensor with predictable properties focused on the atomic level.  The history of graphene applications shows that large-scale applications are normally hindered by defects; defect-free graphene on a large scale has not been demonstrated.  It is known that defects in graphene can change the properties of the material.  This work is directed at understanding the relationship between various defects and graphene’s electrical properties.  The intent is to develop a means of predicting the capabilities of the sensor based on the placement and type of the defects.

Work at the University of Pennsylvania has been focused on tungsten ditelluride. [Ref. 3]  The researchers think that this material can be tuned to have different properties.  A layer of tungsten ditelluride is three atoms thick, and the projection is that slightly different configurations of the atoms will produce different properties.  This becomes another potential application of 2-D materials.

The Physics World “Breakthrough of the Year” award was given to Pablo Jarillo-Herrero of MIT. [Ref. 4]  His work led to the discovery of “twistronics”, a method of fine-tuning various material properties by “twisting” (rotating) adjacent layers.  A team collaborating with MIT showed that adding electrons to the “twisted” material should allow them to produce a superconducting material.  Theoretical physicists have indicated that a number of applications could be developed using materials produced this way.

The University of Glasgow has developed a contact printing system that embeds silicon nanowires into flexible surfaces. [Ref. 5]  It prints both bottom-up zinc-oxide nanowires and top-down silicon nanowires.  The nanowires are 115nm in diameter and spaced 165nm apart.  While the size and spacing are not spectacular, the ability to print on flexible surfaces at these sizes is very interesting.

Work at the Tokyo Institute of Technology developed a molecular wire in the form of a metal electrode/molecule/metal electrode junction incorporating a polyyne doped with ruthenium. [Ref. 6]  The key take-away is that this work is based on engineering the energy levels of the conducting orbitals of the atoms along the wire.

Based on these few examples, developments in nanomaterials, their properties, and applications are promising for this and the coming years.  As theoretical efforts continue to progress, the “discovery” of material properties will continue to open new areas of research.

References:

  1. https://www.rdmag.com/article/2019/01/disordered-magnesium-crystals-could-lead-better-batteries?et_cid=6573209&et_rid=658352741&et_cid=6573209&et_rid=658352741&linkid=Mobius_Link
  2. https://electroiq.com/2018/12/graphenes-magic-is-in-the-defects/
  3. https://www.rdmag.com/news/2017/02/versatile-2-d-material-grown-topological-electronic-states?cmpid=verticalcontent
  4. https://physicsworld.com/a/discovery-of-magic-angle-graphene-that-behaves-like-a-high-temperature-superconductor-is-physics-world-2018-breakthrough-of-the-year/?utm_medium=email&utm_source=iop&utm_term=&utm_campaign=14259-40795&utm_content=Title%3A%20Discovery%20of%20%E2%80%98magic-angle%20graphene%E2%80%99%20that%20behaves%20like%20a%20high-temperature%20superconductor%20is%20Physics%20World%202018%20Breakthrough%20of%20the%20Year%20%20-%20explore_more
  5. https://semiengineering.com/manufacturing-bits-sept-18/
  6. https://www.titech.ac.jp/english/news/2018/042199.html


Challenging times ahead thanks to technology

Last month’s blog was about the challenges of distinguishing real scientific research from reports that include falsified or erroneous conclusions.  It is difficult enough to understand results that are presented without complete coverage of the underlying premises.  There are professionals who have indicated that careful watching of a person’s gestures or mannerisms during a presentation can provide evidence of inaccuracies.  Professionals are also able to evaluate changes in voice tone that can provide similar clues.

Or at least they used to be.  In an April 2018 article, “Forging Voices and Faces: The Dangers of Audio and Video Fabrication” [Ref. 1], the author mentions a speech that President John F. Kennedy was to give the evening of November 22, 1963.  A company re-created the speech that was to be given by synthesizing audio fragments of Kennedy’s actual voice.  There are currently a number of programs that can be employed to synthesize audio.

In the 1960s there were a number of efforts by the Russians to remove people who were no longer in favor from pictures.  At that time, it took massive computing power costing in excess of $50,000, plus much labor, to achieve the removal of a person.  [See Ref. 2 for a number of pictures showing the removal.]

Today, there are a number of programs that can be employed on relatively inexpensive desktop computers to make credible changes to photographs.  The Wall Street Journal is working to educate its journalists on identifying what are being called “deepfakes”. [Ref. 3]  Some things, like the direction of shadows or changes of resolution within a picture, can be obvious.  As the Wall Street Journal states: “Seeing isn’t believing anymore.  Deep-learning computer applications can now generate fake video and audio recordings that look strikingly real.” [Ref. 4]

One of the most recent articles on the impact of computer-generated capabilities is from the IEEE. [Ref. 5]  The focus is on the ability of artificial intelligence software to generate digital doppelgangers of anyone.  Work at the University of Washington cited in the article shows how “fake” images can be created from images available on the internet.  In particular, the researchers chose to work with high-resolution images of Barack Obama.  The researchers had a neural net analyze millions of video frames to capture all of his facial mannerisms as he talked.

There are still areas that need improvement before an actual replication of a person speaking is possible: the superposition of facial features, including the muscles that move when a person is speaking, cannot yet be accurately replicated when the person turns slightly.  But that is just a matter of improving the techniques.

There are articles, which are intentionally not referenced, that show how a Hollywood procedure can capture the movements of an actor in a general manner and then couple the captured movement points to another person.  It is a technique that is also used in animation.  It’s possible to photoshop the head of a person onto a look-alike body.  Any person can be inserted into the actions, and his or her voice can be created to enhance the believability of the video.  What happens when these techniques are available on personal computers?

Where does this lead?  Reference 6 indicates that government officials are concerned that the next US presidential election could witness a number of fake videos and a serious disruption of the election.  We are losing sources of information that can be trusted.  How does a civilization survive when the information that is viewed, heard, or read has a strong probability of having been manipulated?

References:

  1. https://spectrum.ieee.org/computing/software/forging-voices-and-faces-the-dangers-of-audio-and-video-fabrication
  2. https://en.wikipedia.org/wiki/Censorship_of_images_in_the_Soviet_Union
  3. https://www.telegraph.co.uk/technology/2018/11/15/journalists-wall-street-journal-taught-identify-deepfakes/
  4. https://www.wsj.com/articles/deepfake-videos-are-ruining-lives-is-democracy-next-1539595787?mod=hp_listb_pos1
  5. https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-creates-fake-obama
  6. https://www.washingtontimes.com/news/2018/dec/2/vladimir-putins-deep-fakes-threaten-us-elections/


Faking Science

A portion of this topic was covered in the April 2018 blog, but enough additional material has surfaced that this information needs to be covered again.  The reproducibility issue covered in April is one thing.  The ability to slant results through the use of statistics [Ref. 1] has been known for a long time.  The omission of data points that don’t support the conclusions is another method.  It is always possible to come up with an argument that supports changing results, but is that proper?

The National Institutes of Health Clinical Center Department of Bioethics has a brochure [Ref. 2] that lists seven key steps for the ethical development of research.  Two of these are Scientific Validity and Independent Review.  The impact on society is a key consideration for the overall research.  Medical research is directed at saving lives and improving the quality of life for impacted people.  Consider the following situations:

Professor Brian Wansink of Cornell University was considered a leading researcher in eating behavior.  He resigned earlier this year due to findings that he misreported research data, employed problematic statistical techniques, and did not properly document and retain research results.  The main contention about his statistical work was that he employed one technique (p-hacking) that involves running statistical analyses until statistically significant results are obtained and another (HARKing) that involves hypothesizing after the results are known. [Ref. 3]

The headline reads: “Harvard Calls for Retraction of Dozens of Studies by Noted Cardiac Researcher.” [Ref. 4]  Dr. Piero Anversa published results suggesting that damaged heart muscle could be regenerated with stem cells, although his work could not be replicated by independent researchers.  There were numerous awards for clinical trials.  More than 30 of his published papers have been questioned, and as a result an entire field of study developed by Dr. Anversa is now in doubt.  His institution, Brigham and Women’s Hospital, a Harvard Medical School teaching hospital, paid $10 million to settle research fraud allegations. [Ref. 5]

Duke University had a researcher in the lab of a prominent pulmonary scientist arrested on charges of embezzlement. [Ref. 6]  The investigation turned up some unusual things.  The end result was that 15 of the scientist’s papers were retracted.  It was claimed that the research in these papers had enabled Duke University to obtain over $200 million in grants.

Unfortunately, these are not isolated cases.  The onlineuniversities web site provides more details in “The 10 Greatest Cases of Fraud in University Research.” [Ref. 7]  It is worth a quick skim to see the areas of research and the impact on people.  Remember these sentences from the second paragraph?  The impact on society is a key consideration for the overall research.  Medical research is directed at saving lives and improving the quality of life for impacted people.

Other areas of scientific research are not immune from “interesting” shenanigans.  A recent article in the Washington Post [Ref. 8] contains a claim by a scientist that a recently released oceanographic study contains errors that call its projected results into question.  The response was that the scientists were working quickly to create the report and may have included inadvertent mistakes.  Understandable.  However, the National Oceanic and Atmospheric Administration (NOAA) has refused to provide the research data, notes, etc., to Congress. [Ref. 9]  As a federal agency, NOAA receives its budget from Congress, and Congress has oversight responsibility.

The last reference has a number of interesting observations. [Ref. 10]  A key point is that a developing crisis creates the need to investigate and understand its cause and impact.  There is a reference to Al Gore’s book quoting Upton Sinclair: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

If we can’t believe that scientific research is driven by facts, hypothesis development and testing, and valid conclusions based on reproducible experiments, how can we trust the actions that the results are claimed to require?

References:

  1. How to Lie with Statistics, Darrell Huff, ISBN-13: 978-0393310726, ISBN-10: 0393310728
  2. https://bioethics.nih.gov/education/FNIH_BioethicsBrochure_WEB.PDF
  3. https://www.wsj.com/articles/a-cornell-scientists-downfall-1537915735
  4. https://www.nytimes.com/2018/10/15/health/piero-anversa-fraud-retractions.html
  5. https://www.thecrimson.com/article/2017/4/28/brigham-pays-fine-fraud-allegations/
  6. http://www.sciencemag.org/news/2016/09/whistleblower-sues-duke-claims-doctored-data-helped-win-200-million-grants
  7. https://www.onlineuniversities.com/blog/2012/02/the-10-greatest-cases-of-fraud-in-university-research/
  8. Scientists acknowledge errors in Study of Oceans, Austin American Statesman, Thursday, November 15, 2018, page A8
  9. http://intelligentuspolitics.com/noaa-refuses-to-provide-climate-research/
  10. Onward, climate soldiers, https://www.washingtontimes.com/news/2018/nov/13/science-loses-when-a-system-of-penalties-and-rewar/



Nanotechnology and Electronics

Nanotechnology applications competing for a place in electronic circuitry face challenges in manufacturing volume, reliability, and cost.  Until the manufacturing volume can be demonstrated, the reliability cannot be evaluated.  Cost will be a function of developing means of high-volume manufacturing.  This is still some time in the future.  If the challenges are engineering challenges, the solutions will be found with enough industry effort.  There are three areas where nanotechnology appears to be providing some indication of a promising future: flexible nanomaterial-based circuitry, sensors, and graphene-based circuitry.

Flexible Nanowires:

Researchers from the University of Glasgow [Ref. #1] have developed a process that can print nanowires using silicon and zinc-oxide materials with average diameters of 115nm and spacing of 165nm.  This permits the production of flexible electronic circuits.  An advantage is that the circuitry can be produced over a large area.  The potential for this approach is to provide the underlying circuitry for form-fitting and flexible circuits.  One example that immediately comes to mind is the creation of custom circuits to work on prosthetics.

Sensors:

One area of sensor development is the application of carbon nanotubes embedded within fibers.  Nerve-like conductive composites are produced using an electro-deposition of functionalized carbon nanotubes [Ref. #2].  The composites have been embedded within a number of traditional fabrics.  The coatings range from 205nm to 750nm thick.  The initial work was employed to measure the pressure on various parts of subjects’ feet.  The claim is that the application can produce superior results in a variety of medical situations where the force pattern is important in the development of an issue or injury.

Another sensor application is the development of an electronic “nose”.  There have been numerous research papers on this topic.  One example [Ref. #3] describes efforts to enable a device employing nanosensors to analyze the various chemicals to which the device is exposed.  The device examines molecules in vapor.  The device is small (handheld) and is accurate to within 20% of a gas chromatograph, which costs significantly more and takes longer.  Applications that require expensive and/or time-consuming measurements can be replaced by portable and inexpensive devices that provide rapid analysis.

Graphene-based Circuitry:

One of the issues of mass production is the ability to achieve the volumes at an efficient cost, which implies that the circuitry must be produced cheaply and rapidly.  Another issue is that any approach must compete with semiconductor manufacturing.  Through the use of lithography for patterning, billions of transistors can be produced very rapidly.  There has been a change, however, in the previously continual progress of reducing cost as the dimensions of the devices are reduced: the challenges of making nearly single-digit-nanometer devices have made them more expensive to produce.  The consequence is that the cost per transistor has stopped shrinking.
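
A toy model of that last point, shown as a sketch below (every number is an assumption chosen only to illustrate the mechanism, not industry data): if the processed-wafer cost rises faster than the transistor density, the cost per transistor goes up even though the density doubles.

```python
# Toy cost-per-transistor comparison between two hypothetical generations.
def cost_per_transistor(wafer_cost_usd: float, transistors_per_wafer: float,
                        yield_fraction: float) -> float:
    return wafer_cost_usd / (transistors_per_wafer * yield_fraction)

# Hypothetical older generation.
old = cost_per_transistor(wafer_cost_usd=3000, transistors_per_wafer=4e12,
                          yield_fraction=0.9)
# Hypothetical newer generation: twice the transistors, but a much more
# expensive wafer and a lower yield.
new = cost_per_transistor(wafer_cost_usd=7000, transistors_per_wafer=8e12,
                          yield_fraction=0.8)

print(f"old: {old:.2e} $/transistor   new: {new:.2e} $/transistor")
# The "new" generation ends up more expensive per transistor.
```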

The use of 2-D materials has been promising, but high-volume production has not been proven.  Work at imec [Ref. #4] has focused on building 2-D devices and layering them to produce the desired properties.  The issue with the process is that it is a multi-step process involving a number of different materials.  The projection is that material availability in volume is 10 years or more in the future.  The hidden issue is that defect-free 2-D materials are not producible in the dimensions required for volume production.

Summary:

We are seeing the development of potential materials that can be employed to solve issues that cannot currently be addressed.  We are still in the materials development phase.  It will be a while before there is truly volume production of nanoscale electronic systems.

  1. https://semiengineering.com/manufacturing-bits-sept-18/
  2. https://www.nanowerk.com/nanotechnology-news2/newsid=50902.php
  3. https://www.sustainablebrands.com/news_and_views/chemistry_materials/sheila_shayon/electronic_nose_nanotechnology_we_can_analyze_safet
  4. https://semiengineering.com/can-graphene-be-mass-manufactured/


Augmented Reality

Previously, I have mentioned how augmented reality (AR) could be employed to evaluate nanomaterial properties by switching from a 3-D image containing three different parameters to a different view with one or more of the parameters changed.  The real question is how AR might be beneficial to a large number of people.

One of the challenges of business is the necessity for meetings.  Being able to meet face-to-face enables a strong interaction among the people present.  Conference calls are okay but lack the ability to talk about figures and drawings.  Yes, it is possible to send out the information and hopefully have people move to the page that is being discussed.

WebEx and similar approaches are beneficial in that they enable the presenter to move through a presentation with everyone observing the same specific item being discussed.  Yes, it is possible to add video so that individuals can appear on the screen.  The issue with multiple little rectangular video boxes on the screen is that, once a small number of participants is exceeded, it is difficult to focus on the presentation.

I have experienced video conferences where multiple video conferencing rooms are set up.  When this is done correctly, it is almost like sitting across a table from the participants.  The issue with this approach is the requirement for dedicated facilities to enable the communications.

What is it about face-to-face meetings that makes them better than the alternatives we have today?  Face-to-face meetings provide the ability to observe how participants react during a meeting.  Facial changes provide a significant amount of information on how an individual is reacting to the presentation.  This is not possible under current virtual meeting alternatives – today.

Augmented reality inserts computer-generated graphics into a real-world environment.  In most instances, the generated image is three dimensional.  This requires considerable processing power to achieve the desired images.

What if?  What if it were possible to have a camera taking 3-D images of a meeting participant and sending the data to a computer to create an augmented image of that person?  What if the resolution of that image were sufficient to observe facial expressions (on a 3-D image of the person)?  What if there were sufficient transmission capabilities to send and receive multiple images simultaneously?  What if there were sufficient processing power to resolve all the images being sent?

One can envision a future program that can insert meeting participants into empty chairs around a meeting table.  (Obviously, everyone would need to be wearing an AR headset.)  Physically present people would take a seat at the table.  The computer would provide AR images of the remote people and place them in the empty seats.  Hands could be raised to ask questions.  There would be no difference between the physically present and the virtually present people.  Would this approach be as good as an in-person meeting?  Not quite.  The ability to have side discussions during the meeting would still be available, and with sufficient resolution the reactions of the attendees would be recognizable.  What would be missing are the informal conversations before and after the meeting.

Is this possible today?  Not yet.  There are a number of issues, and the technology is still in the early stages.  Consider the data requirements.  If the image is 3-D, the data required is at least twice that of a typical video transmission.  The resolution needs to be higher, which increases the data to be transmitted.  In some locations, the upload speed is more than a factor of 10 slower than the download speed.  If there are 5 or 10 people in one location, the system will probably not perform well.  So, higher upload speeds will be required.  The computational power required is significant.  If the people are rendered virtually, the refresh rate needs to be many times a second.  Again, higher data transmission rates are required.  The computer that creates the virtual image must refresh the image many times a second, and this needs to be done for each of the participants.  Does this mean multiple computers with multiple cores?  Probably.  My guess is that a system with the capability described would have multiple computers.  Each computer could handle a set number of virtual people; if that number were exceeded, a second computer would be brought on line.  The possibilities are endless.  We need a company to come forward with an augmented reality meeting product.  Consider that program to be something like WebEx on super-steroids.
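
A back-of-the-envelope sketch of the bandwidth argument above (every number is an assumption chosen only to illustrate the scaling, not a measured requirement):

```python
# Rough per-participant upstream bandwidth for a 3-D AR meeting feed.
# Assumptions, for illustration only: a 2-D HD stream at ~5 Mbit/s, a 2x factor
# for the 3-D/depth data, and a 2x factor for the higher resolution needed to
# read facial expressions.
base_2d_mbps = 5.0
three_d_factor = 2.0
resolution_factor = 2.0

per_person_mbps = base_2d_mbps * three_d_factor * resolution_factor
people_in_room = 8
upload_needed_mbps = per_person_mbps * people_in_room

print(f"per participant: ~{per_person_mbps:.0f} Mbit/s upstream")
print(f"8 people on one uplink: ~{upload_needed_mbps:.0f} Mbit/s")
# An uplink that is 10x slower than a 100 Mbit/s downlink (~10 Mbit/s up)
# would clearly struggle with this load.
```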


Modeling and Nanotechnology

Modeling is the development of a mathematical representation of an actual or proposed set (group) of interactions that can be employed to predict the functioning of the set (group) under possible conditions.  Interest in modeling has grown since the late 1970s and early 1980s, coincident with the development of more and more powerful computers.  The size of models has grown, and the sophistication of the evaluations has increased.

What does this have to do with nanotechnology?  It has been demonstrated that physical material properties change as the size of the material decreases into the low double-digit nanometers.  The application of gold nanoparticles to produce the color red in stained glass windows is an example of usage that is hundreds of years old.  The effect of silver nanoparticles on bacteria is another example.  Is it possible to develop a model that will predict the behavior of materials for any size or shape?  The answer is: “Yes, BUT”.

One instance that raises a question about modeling goes back to the early 1960s.  Professor Edward Lorenz (MIT) employed a computer simulation of weather patterns.  Supposedly, he left to get a cup of coffee, and when he returned, he was very surprised.  He was rerunning a simulation he had previously made, except this time he rounded 0.506127 to 0.506.  This small change in value changed the entire pattern of the projected two months of weather results. [Ref. #1]  This result has become known as the butterfly effect, which refers to the idea that a very tiny occurrence can change the course of what follows.  This terminology, as used in chaos theory, represents the sensitive dependence on initial modeling conditions, where tiny differences can produce significant changes in the resulting projections. [Ref. #2]
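
The effect is easy to reproduce with a far simpler system than Lorenz's weather model.  The sketch below uses the logistic map, a standard chaotic toy system (not the model Lorenz ran); the rounded starting value mirrors his 0.506127 to 0.506 truncation:

```python
# Iterate the chaotic logistic map x -> r*x*(1-x) from two initial conditions
# that differ only in the fourth decimal place, mimicking Lorenz's rounding.
r = 3.9
x_full, x_rounded = 0.506127, 0.506

for step in range(1, 41):
    x_full = r * x_full * (1 - x_full)
    x_rounded = r * x_rounded * (1 - x_rounded)
    if step % 10 == 0:
        print(f"step {step:2d}: full={x_full:.6f}  rounded={x_rounded:.6f}  "
              f"diff={abs(x_full - x_rounded):.6f}")
# After a few dozen iterations the two trajectories bear no resemblance to each
# other, even though the starting values differed by only ~0.0001.
```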

In a quote I attribute to Professor Robert Shannon of Texas A&M University, he said: “All models are wrong!  Some are useful.”  Once naturally occurring randomness affects the data employed, the results are uncertain.  The interesting thought is that a complex model run on a 16-bit computer could produce completely different results from the same model run on a 64-bit computer.  In addition to this difference in precision, the initial starting conditions are important.  In many models, the initial conditions are left empty or zero and the model is run long enough to remove the “initialization” bias.  Obviously, some models, like weather forecasts, need initialization data.  In the weather example, there are a number of tropical storm forecasting models that are compared.  Each model’s projection is actually based on a number of runs to determine an average or best fit.  In comparisons, the European model, which draws on more sensor data than the US model, tends to be a bit more accurate.  Trying to predict smaller effects is more difficult because minor changes in variables can cause greater effects when the forecast is restricted to smaller regions.
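
The precision point can be illustrated directly as well.  This sketch runs the same chaotic iteration in 32-bit and 64-bit floating point (numpy is assumed to be available; the system is again the logistic map, chosen only for illustration):

```python
import numpy as np

# Run the same chaotic logistic map in 32-bit and 64-bit floating point.
# Rounding error alone is enough to make the trajectories diverge.
r64 = np.float64(3.9)
r32 = np.float32(3.9)
x64 = np.float64(0.506127)
x32 = np.float32(0.506127)

for step in range(1, 61):
    x32 = r32 * x32 * (np.float32(1.0) - x32)
    x64 = r64 * x64 * (np.float64(1.0) - x64)
    if step % 20 == 0:
        print(f"step {step:2d}: float32={float(x32):.6f}  float64={float(x64):.6f}")
# The two precisions agree at first, then drift apart completely, which is why
# "the same model" can give different answers on hardware with different precision.
```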

So, the question is what kind of results can be anticipated from modeling in nanotechnology.  One example is the recently identified Schwarzite carbon structure. [Ref. #3]  Material with these properties was predicted as early as the 1880s, but no one was able to create it to validate the theoretical (modeling) results.  Now that it has been created, the predicted properties can be tested and evaluated.  Predictions can point out directions to follow, but they do not guarantee that the material will have the specific, predicted properties.

There was a recent article [Ref. #4] implying that computer simulation will be used to develop the laws of nature and will end much of the work being done in theoretical physics.  While there might be benefits gained, and some people are indicating that artificial intelligence (AI) will provide interesting breakthroughs, these “discoveries” will still need to be proven.

One thing that modeling cannot do is find surprises.  University of Wisconsin physicists constructed a 2-D form of tungsten ditelluride [Ref. #5] that has unanticipated properties, including “spontaneous electrical polarization” when two monolayers of the material are combined.  Until models can be constructed that contain all the variables and the correct relationships among particles, models will be “wrong” but useful.


References:

  1. https://www.technologyreview.com/s/422809/when-the-butterfly-effect-took-flight/
  2. https://dzone.com/articles/what-role-does-the-butterfly-effect-play-in-tech
  3. https://www.graphene-info.com/schwarzite-carbon-structures-identified?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+graphene-info+%28Graphene-info+%3A+Graphene+news+and+resources%29
  4. https://www.quantamagazine.org/the-end-of-theoretical-physics-as-we-know-it-20180827/
  5. https://electroiq.com/2018/08/for-uw-physicists-the-2-d-form-of-tungsten-ditelluride-is-full-of-surprises/
