Changes in Nanotechnology Perspectives

There are announcements about new findings or new concepts in nanotechnology that may not appear to be “big” changes.  Typically, the definition of a nanomaterial is: for a material to be called a nanomaterial, its size must be smaller than 100 nm in at least one dimension, or the material should consist of 50% or more particles having a size between 1 nm and 100 nm.  A comment in the latest imec magazine issue [Ref. 1] adds a little more clarity: “What makes these nanoparticles special, is that their properties cannot simply be derived from their bulk counterparts due to quantum physics effects. There is a direct effect of size on several physical and chemical properties.

“Why the hazardous properties can be different is because the charge density on the surface is different. Since nanoparticles are smaller than bigger particles, the surface is more curved and the charge density is larger. Additionally, their free energy is larger, which can change their catalytic activity. Finally, the number of atoms touching the skin – the first layer of contact – as a percentage is larger than with larger particles. Some of the nanomaterial properties change in a predictable way, others in a threshold way.” [Ref. 2]  But the article also states: “What is important for us is the size threshold, so that similar materials can be treated the same way.”
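
To make the size-threshold point concrete, here is a minimal sketch that encodes the working definition quoted above; the function and the example inputs are purely illustrative, not any regulatory body's code:

```python
# A minimal sketch of the working definition above: a material counts as
# "nano" if any single dimension is below 100 nm, or if at least 50% of its
# particles fall in the 1-100 nm range.  Function and inputs are illustrative.

def is_nanomaterial(dimensions_nm=(), particle_sizes_nm=()) -> bool:
    if any(d < 100 for d in dimensions_nm):
        return True
    if particle_sizes_nm:
        in_range = sum(1 for s in particle_sizes_nm if 1 <= s <= 100)
        return in_range / len(particle_sizes_nm) >= 0.5
    return False

print(is_nanomaterial(dimensions_nm=(5000, 5000, 80)))       # thin film: True
print(is_nanomaterial(particle_sizes_nm=(40, 60, 150, 90)))  # powder: True
```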

As long-time readers of this blog may remember, size makes a large difference in many different ways: the transition of aluminum nanoparticles as a size boundary is crossed, the ability of gold nanoparticles to change colors based on size, and the ability of silver to kill bacteria as size decreases past a threshold value.  The effects can be different even for nanomaterials in the same periodic group.

The above quotes are from an article published by imec in their monthly magazine describing an effort to create an understanding of nanotechnology safety in Europe among their semiconductor manufacturing workers.  The US has funded two distinct efforts on nanotechnology safety to provide an education source for both workers and students.  The educational aspects of nanotechnology safety are addressed in an Introductory Course and an Advanced Course funded by NSF at Texas State University. [Ref. 3]  This effort has produced a textbook on nanotechnology safety. [Ref. 4]  OSHA funded an earlier effort at Rice University that developed an eight-hour training course for workers in various industries. [Ref. 5]  (Disclaimer: the author of this blog was involved in the three items referenced immediately above.)

There is new modeling work that describes the ability to create new materials that have unusual properties.  The issue is that developing a model and manufacturing the modeled structure are not straightforward.  Next month, this blog plans to cover the latest information on the modeling efforts.

References:

  1. https://www.imec-int.com/en/imec-magazine/imec-magazine-april-2018/assessing-nanorisks-in-the-semiconductor-industry
  2. Quote by Dimiter Prodanov in Ref. 1.
  3. http://nsf-nue-nanotra.engineering.txstate.edu/curriculum.html
  4. Nano-Safety, Dominick Fazarro, Walt Trybula, Jitendra Tate, Craig Hanks, De Gruyter Publisher 2017, ISBN 978-3-11-037375-2
  5. “Introduction to Nanomaterials and Occupational Health,” available at https://www.osha.gov/dte/grant_materials/fy10/sh-21008-10/1-introduction.pptx


Medical Nano

The development of nanotechnology in medicine is a longer-term process than nanotechnology in general.  The reason is that applying any technology, device, or medicine to humans involves a process with many steps, each requiring a long time, to demonstrate the ability to pass all the various regulations.

This month’s blog will look at developments in the last few years in three areas: 1) cancer treatment; 2) applications impacting the heart; and 3) the eye.

Cancer treatment has been a key research area since before 2000.  Initial work involved attaching gold nanoparticles to certain types of viruses.  Cancer cells are hungry and will devour various types of viruses.  By inserting gold nanoparticles into preferred viruses, the cancer cells would grab them and try to consume them.  By illuminating the site with its concentration of gold-carrying viruses with IR radiation, the temperature of the virus and cancer cell can be raised high enough to kill the cancer cells.  In a similar approach, with carbon nanotubes encapsulated in the viruses, exposure to RF waves converts the radiation into heat efficiently and kills the cancer cells.  Where are we today in early 2018?

In January 2018, the Seoul National University Hospital indicated that it has developed a new magnetic nanoparticle that can be employed to improve the therapeutic benefits of cancer treatment.  The application of a magnetic field causes the nanoparticles to heat, which kills the cancer cells.  The claimed benefits are that the treatment can be focused on the specific cancer cells while basically leaving the surrounding healthy cells without damage.  The claim in the latest work is that the magnetic nanoparticles developed are able to heat much faster than previously developed ones, which minimizes the amount of energy required to heat the nanoparticles (temperatures as high as 50C could be required).  An additional advantage is that the nanoparticles contain identical components to a previously FDA-approved iron oxide nanoparticle. [Ref. 1]

As I was proofing this blog, I received an article from R&D Magazine that provides a history of the direction of cancer treatment. [Ref. 2]  “Today there are about 20,000 papers written on this topic each year. A lot of researchers are starting to work in this area and we (the National Cancer Institute – NCI) are receiving large number of grant applications concerning the use of nanomaterials and nano-devices in novel cancer interventions. In last three years there has been two FDA approvals of new nanoparticle-based treatments and a multitude of clinical trials.”  I recommend reading it.

Next, consider addressing heart issues through the application of nanotechnology.  A nanoparticle developed by University of Michigan researchers could be the key to a targeted therapy for cardiac arrhythmia, which impacts 4 million Americans each year with a resulting 130,000 deaths.  Cardiac arrhythmia is a condition that causes the heart to beat erratically and can lead to heart attack and stroke.  Currently, the disease is treated with drugs, with the possibility of serious side effects.  Cardiac ablation is also employed, but the effect of a high-powered laser damages surrounding cells.  The current work, not yet done on humans, kills the cells causing the problem without damaging the surrounding cells. [Ref. 3]

There is also work being done on cryogenically freezing and rewarming sections of heart tissue for the first time, an advance that could pave the way for organs to be stored for months or years.  By infusing tissue with magnetic nanoparticles, frozen tissues can be heated in an applied magnetic field, which generates a rapid and uniform burst of heat.  Most previous work on warming tissue samples has run into problems with the tissues shattering or cracking.  If this can be fully developed, the availability of organs for transplant becomes much better.  Donor organs start to die as soon as the organ is cut off from the blood supply.  Possibly as much as 60% of potential donor organs are discarded because of the four-hour effective ice-cooling limit of organs. [Ref. 4]

Pioneering nanotechnology research has applications for cardiovascular diseases.  Samuel Wickline, MD, the founding director of the USF Health Heart Institute, has been harnessing nanotechnology for molecular imaging and targeted treatments.  His team has developed nanostructures that can carry drugs or act as therapeutic agents themselves against various types of inflammatory diseases, including cancer, cardiovascular disease, arthritis, and even infectious diseases like HIV. [Ref. 5]

Treatment of the eye has a number of needs.  Typically, less than 5% of the medicine dose applied as drops actually penetrates the eye; the majority of the dose is washed off the cornea by tear fluid and lost.  Professor Vitaliy Khutoryanskiy’s team has developed novel nanoparticles that can attach to the cornea and resist the wash-out effect for an extended period of time.  If these nanoparticles are loaded with a drug, their longer attachment to the cornea will ensure more medicine penetrates the eye, improving drop treatment.  The research could also pave the way for new treatments of currently incurable eye disorders such as Age-related Macular Degeneration (AMD), the leading cause of visual impairment, with around 500,000 sufferers in the UK.  While there is no cure for AMD, experts think its progression could be slowed by injections of medicines into the eye.  This new development could provide a more effective solution through the insertion of drug-loaded nanoparticles. [Ref. 6]

A coming generation of retinal implants that fit entirely inside the eye will use nanoscale electronic components to dramatically improve vision quality for the wearer, according to two research teams developing such devices.  Current retinal prostheses, such as Second Sight’s Argus II, restore only limited and fuzzy vision to individuals blinded by degenerative eye disease. Wearers can typically distinguish light from dark and make out shapes and outlines of objects, but not much more.  The Argus II contains an array of 60 electrodes, akin to 60 pixels, that are implanted behind the retina to stimulate the remaining healthy cells. The implant is connected to a camera, worn on the side of the head, that relays a video feed. [Ref. 7]

The above descriptions are only the tip of the iceberg.  There is much work being done around the world.  There has been an interesting series of articles on the development of various medical technologies that are joint efforts teaming US and Chinese universities.

References:

  1. http://www.koreabiomed.com/news/articleView.html?idxno=2283
  2. https://www.rdmag.com/article/2018/02/nanoparticle-based-cancer-treatment-look-its-origins-and-whats-next
  3. http://ns.umich.edu/new/releases/23249-nanotechnology-could-spur-new-heart-treatment
  4. https://www.theguardian.com/science/2017/mar/01/heart-tissue-cryogenics-breakthrough-gives-hope-for-transplant-patients
  5. https://hscweb3.hsc.usf.edu/blog/2017/01/20/pioneering-nanotechnology-research-applications-cardiovascular-diseases/
  6. https://www.nanowerk.com/nanotechnology-news/newsid=37649.php
  7. https://www.technologyreview.com/s/508041/vision-restoring-implants-that-fit-inside-the-eye/


Is Nano too large?

Are we starting to see device developments at the atomic level?  During 2017, there were many stories on graphene and other two-dimensional materials.  Various companies have started developing production capabilities.  Yes, graphene and similar materials are only one atom thick.  Does this make them atomic-level?  In one dimension, the answer is yes.  The other dimensions are much larger.  What we are starting to see is the development of devices that combine different two-dimensional materials to produce interesting devices.

A team of University of Texas at Austin researchers collaborating with Peking University researchers has developed a very thin and very dense memory. [Ref. 1]  Previously, semiconductor fabrication has kept separate areas on a device for computing circuitry and memory storage.  The researchers have called the devices “atomristors” to emphasize that their work should improve on the capabilities of memristors.  Their technical paper is available through Ref. 2.  The increase in density is due to the application of multiple two-dimensional materials.  The entire memory cell is less than 2 nanometers thick.  While they did not mention this fact in the general publications, these devices should be faster and require less power than comparable semiconductor devices performing the same functions.

Memristors are not disappearing.  They are a type of Re(sistive)RAM, which was projected to replace NAND.  Memristors work by changing the resistance of the material.  Their use is different in that the change in the material is not necessarily binary but can be considered analog.  Of course, today's semiconductors are based upon on and off states.  So, to fully use the capabilities requires something that is more analog-based.  As mentioned last year, there are solid-state vacuum tubes, which are analog.  Maybe there are some interesting possibilities that can employ both types of devices.

As the speed of calculations increases, the issue becomes the time lag in getting signals across the semiconductor device.  It might seem strange that there is a signal lag across such a small dimension, but that is actually the case.  Researchers are looking at using optical fiber to transmit the signals.  This creates other complexity for the circuitry in converting signals on both ends of the transmissions.  As semiconductor dimensions continue to shrink, the size of key features will be well below 10 nanometers.  There are issues with making these small features in mass production; the size is below the wavelength of any existing light source used in lithography to create the patterns.  The semiconductor industry has been creative in developing techniques that produce features well below the Rayleigh limit (wavelength/8).

Current tools use immersion lithography to create the majority of the very fine images employed today.  This process requires the use of multiple masks per individual layer.  This is an expensive process.  The application of EUV (13.5nm) has the potential for smaller features.  It is anticipated that EUV will be coming into semiconductor manufacturing in a major way.  The challenge is the production of features that are less than 5nm.  If the images need to have a 10% range of feature sizes, that means controlling the features to 0.5nm (or 5 Angstroms).
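
A rough sketch of the scaling arithmetic behind those statements, using the simplified resolution form CD = k1 x wavelength / NA; the k1 and NA values below are assumed, representative numbers rather than any specific tool's specification:

```python
# A rough sketch of lithography scaling, using CD = k1 * wavelength / NA.
# The k1 and NA values are assumed, representative numbers only.

def min_feature_nm(wavelength_nm: float, na: float, k1: float) -> float:
    """Smallest printable feature (half-pitch) for a given source/optics."""
    return k1 * wavelength_nm / na

# 193 nm ArF with water immersion (NA ~1.35), aggressive k1 ~0.28:
print(f"193i single exposure: ~{min_feature_nm(193, 1.35, 0.28):.0f} nm")

# EUV at 13.5 nm with ~0.33 NA optics, k1 ~0.4:
print(f"EUV single exposure: ~{min_feature_nm(13.5, 0.33, 0.4):.0f} nm")

# Holding a 5 nm feature to a 10% range means controlling it to:
print(f"5 nm feature tolerance: {5 * 0.10:.1f} nm (5 Angstroms)")
```

Printing below these single-exposure limits is what drives the multiple-mask (multi-patterning) expense mentioned above.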

We will be seeing more experimental tools that can measure/image features with accuracies that are much less than 1nm.  It should be an interesting year.


References:

  1. https://news.utexas.edu/2018/01/17/ultra-thin-memory-storage-device-for-more-powerful-computing
  2. http://pubs.acs.org/doi/10.1021/acs.nanolett.7b04342


Coming Attractions?

It is always a challenge to write an end-of-year blog.  The question is what to focus on.  Highlights of 2017?  Possible coming items in 2018?  Or maybe some of both.  There has been some expansion of graphene manufacturing capabilities and some consolidation of companies.  The advances in graphene-based electronics are moving slowly, but they are moving forward.  There are a number of applications being developed in medicine.  Probably the most interesting development in graphene is the possibility of creating chemical and biological sensors based on graphene electronics. [Ref. 1]

The concept of Atomically Precise Manufacturing, or APM, has been around for some time.  Each advance in manufacturing requires the ability to measure at least an order of magnitude more precisely than what we are attempting to make.  A measuring device that can only resolve 10 centimeters is useless for making something that needs an accuracy of 1 or 2 millimeters.  There is also a need to improve the materials and the manufacturing process.

The following examples are from John Randall’s blog. [Ref. 2]  The Romans developed ball bearings that were employed in their advanced warship technology.  The best material available for their manufacture incorporated wood.  About 2,000 years later, metallurgy had improved enough to provide better materials and the manufacturing precision to create metal spheres with tight enough tolerances to be used as ball bearings.  Yes, lead spheres were used at an earlier date for weapons.  The key to a loose-tolerance sphere was to build a tower high enough that the droplets of lead would harden before hitting the surface below.  Not exactly a precision manufacturing process.

A second example is the steam engine.  The first example has been attributed to Heron of Alexandria in about 100 AD.  It was almost 1,600 years later that English blacksmiths achieved tight enough tolerances to make steam engines that could actually be produced in quantity and accomplish work.  Human ingenuity is an important part of the creation of ideas, but without tools to manufacture and verify the product, an idea remains only an idea.

As we move into smaller and smaller dimensions, the limiting factor becomes the atom.  We can see the interactions of atoms among groups of them.  In emerging research, researchers are able to observe how electrons move among various atoms.  This work is still in the early stage of development, and the phenomena observed are of extremely short duration.  Consequently, our equipment must not only measure very minute changes in atomic properties but also make those observations over an extremely short interval.  I anticipate that there will be further development of novel measurement techniques in 2018.  Until we can fully understand the interactions among the atoms that constitute a molecule of material, we cannot harness those interactions to create superior materials.

Here is wishing all a healthy and prosperous New Year.  I am hopeful that we will see some interesting developments in the ability to observe the atomic scale of materials.


References:

  1. https://spectrum.ieee.org/nanoclast/semiconductors/devices/electronic-noise-in-graphenebased-sensors-reduced-and-sensitivity-increased
  2. https://www.zyvexlabs.com/importance-manufacturing-precision/


Semiconductors and Nanotechnology

There is a rule developed by Gordon Moore that projected the increase in density of semiconductors.  For many years the path “Moore’s Law” predicted (a doubling of density roughly every 18 months) has been followed.  The driver for this pattern was increased computing power for more and more complex operations.  For a number of years, the decreasing size was identified by the node, which is based on the smallest half-pitch.  (The half-pitch is one half of the minimum spacing between adjacent smallest metal lines.)  In the 1990s, the node designation diverged from the minimum spacing due to technology challenges.  [Cf. Bill Arnold’s article for an in-depth presentation, Ref. 1]
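
For concreteness, the doubling statement can be written as density(t) = density(t0) x 2^((t - t0)/1.5), with t in years.  A small sketch, using the often-cited ~2,300-transistor 1971 starting point as an assumed reference value; note that the 18-month cadence quoted above overshoots the industry's later pace, which stretched toward two years:

```python
# A small sketch of the doubling arithmetic in Moore's Law as stated above:
# density(t) = density(t0) * 2 ** ((t - t0) / 1.5), with time in years.
# The ~2,300-transistor 1971 starting point is a commonly cited reference
# value; the 18-month cadence is the figure quoted in the text, so later
# decades overshoot actual chips somewhat.

def projected_density(start_count: float, years_elapsed: float) -> float:
    return start_count * 2 ** (years_elapsed / 1.5)

for year in (1971, 1980, 1990, 2000):
    print(f"{year}: ~{projected_density(2300, year - 1971):,.0f} transistors")
```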

Once material sizing requirements move into the nano realm, there can be issues.  Under 50nm line widths, there can be differences in the conductivity of interconnect lines.  Copper conductors in this size range can have changes in resistance due to crystalline structures with grain boundaries, causing conductance variations from line to line.  That is only one of many possible problems.  The chart below is adapted from the wikichip reference [Ref. 2], with the year added in which a particular node is scheduled for volume production.

[Table adapted from wikichip.org, Ref. 2: technology nodes and their scheduled years of volume production]
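
Returning to the interconnect point above: one first-order way to see the line-width effect is the simplified Fuchs-Sondheimer surface-scattering correction.  This is a hedged sketch, not a process model; the specularity value is an assumption, and the grain-boundary scattering responsible for the line-to-line variation mentioned above is ignored, so treat the numbers as trends only:

```python
# A first-order sketch of why narrow copper lines resist more than bulk
# copper, using the simplified Fuchs-Sondheimer surface-scattering term
# rho(w) ~ rho_bulk * (1 + (3/8) * (1 - p) * (mfp / w)).
# The specularity p is an assumed value; grain-boundary scattering is ignored.

RHO_BULK_UOHM_CM = 1.7   # bulk copper resistivity at room temperature
MFP_NM = 39.0            # electron mean free path in copper, ~39 nm
P_SPECULAR = 0.2         # assumed fraction of specular surface reflections

def line_resistivity(width_nm: float) -> float:
    return RHO_BULK_UOHM_CM * (1 + 0.375 * (1 - P_SPECULAR) * MFP_NM / width_nm)

for w in (100, 50, 20, 10):
    print(f"{w:>3} nm wide line: ~{line_resistivity(w):.1f} uohm-cm")
```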

While production is currently in the size range where some effects of nanomaterials can occur, more impact is coming shortly.  The semiconductor industry is working on many different options, including 3D stacking and innovative transistor designs.  However, the projection is still to go to smaller and smaller sizes, with increasing challenges.  The last two blogs have provided some thoughts on the challenges.

There are numerous “surprises” with nanomaterials.  The fact that a non-magnetic material can have magnetic properties at 13 atoms is interesting.  I am unaware of anyone investigating how to incorporate that property into an application.  A transition metal can behave differently in five different states: as a hydrated atom; complexed in a small protein; adsorbed to the surface of a 1nm mineral particle; adsorbed to the surface of a 20nm particle; and adsorbed to the surface of a 200nm particle. [Ref. 3]  In addition, size may matter for crystal orientation preferences.  For example, CeO2 below 10nm prefers being a truncated octahedron with {100} and {111} faces, while CeO2 above 10nm shifts toward a {111} octahedron. [Ref. 4]

So, what is going to be done?  There are many different portions to solving the problem.  Consider the time it takes a signal to cross from one side of a semiconductor chip to the other side.  How can the timing be maintained?  One possibility is to use light.  This requires some type of light guide on the semiconductor.  Research in this area has been ongoing for many years.  Recently there have been some initial publications on employing two-dimensional materials, like graphene, to create tunable gratings at the nano scale.  Is that where semiconductors are going?  Not in the near future, but it is one possible avenue of research for smaller and faster circuitry.

One item that has not been mentioned is the fact that at small scale, circuitry tends to lose more energy, especially as heat.  As computing power increases, the amount of heat generated increases.  As the amount of power required increases, the size of the power supply increases.  That makes portable electronics a problem.  While there is ongoing work to improve the capacity of batteries, the improvements have not produced any breakthroughs.  A number of published articles have demonstrated advances by carefully controlling the nanomaterials employed in the battery.

The above are just a few of the potential challenges for the future.  The manufacturing challenges have not been mentioned, but they are substantial.  The development of increased-capability electronics will require learning more about nanomaterials and how to properly incorporate them in a manner that improves performance and decreases the proportional power requirements.

References:

  1. https://spectrum.ieee.org/semiconductors/design/shrinking-possibilities
  2. https://en.wikichip.org/wiki/technology_node
  3. Hochella, Michael F., Jr. “Nanogeoscience: From Origins to Cutting Edge Applications.” Elements, Vol. 4 (December 2008), pp. 373-379.
  4. Waychunas, Glenn A., and Hengzhong Zhang. “Structure, Chemistry, and Properties of Mineral Nanoparticles.” Elements, Vol. 4 (December 2008), pp. 381-387.


Nano Material Properties and the need for multi-dimensional representations

Material properties are “well” defined.  If one goes to a reference source, the various properties of a pure material can be found.  For example, the atomic number of lithium is 3.  It has an atomic weight of 6.941.  Its specific gravity is 0.534, with a melting point of 180.5C and a boiling point of 1,347C.  Similarly, the atomic number of gold is 79, with an atomic weight of 196.9665.  Its specific gravity is 19.32 (at 20C), with a melting point of 1,064C and a boiling point of 2,807C.

From last month’s blog, the “bulk” material properties may not represent the properties of nanoparticles of the material.  Consequently, there is a need to consider the various individual isotopes of the materials that are being investigated.  The chart below [Ref. 1] shows the change in melting point as the size of the gold nanoparticle diminishes.  (Other examples are available in my September 14th, 2013 blog.)  The melting point of a number of materials starts to decrease when the particle size is near 50nm.

[Figure: melting point of gold versus particle size, Ref. 1]

What is the effect of increasing pressure?  Will it increase or decrease the melting point and/or the boiling point?  The answer is not initially obvious.  It depends on whether the material becomes more dense or less dense as it melts.  If the material requires more space as it melts, higher pressure increases the melting point.  If the material requires less space as it melts (think water), then higher pressure decreases the melting point.  The boiling point typically has an inverse relation with the vapor pressure of the liquid and a positive relationship with atmospheric pressure.
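
The size trend in the melting-point chart can be approximated with the Gibbs-Thomson relation.  Below is a minimal sketch; the gold parameters are rounded literature values and the simple spherical model illustrates the shape of the curve rather than reproducing the Ref. 1 data exactly:

```python
# A minimal sketch of melting-point depression via the Gibbs-Thomson relation,
# T_m(d) = T_bulk * (1 - 4*sigma_sl / (H_f * rho * d)).
# The gold parameters are rounded literature values; illustrative only.

T_BULK = 1337.0      # K, bulk melting point of gold
SIGMA_SL = 0.27      # J/m^2, solid-liquid interfacial energy (approx.)
H_F = 6.37e4         # J/kg, heat of fusion
RHO = 1.93e4         # kg/m^3, density

def melting_point_k(d_nm: float) -> float:
    d = d_nm * 1e-9
    return T_BULK * (1 - 4 * SIGMA_SL / (H_F * RHO * d))

for d in (50, 20, 10, 5, 2):
    print(f"{d:>2} nm gold particle melts near {melting_point_k(d) - 273.15:.0f} C")
```

At 50nm the sketch gives roughly 1,040C, only slightly below the 1,064C bulk value, consistent with the depression becoming noticeable near 50nm.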

[Figure: boiling point of water versus pressure, Ref. 2]

The figure above [Ref. 2] shows the relationship of the boiling point of water to pressure.  (Hence the need for pressure cookers at higher altitudes to reach proper cooking temperatures.)
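
The curve in the water figure can be reproduced reasonably well from the Clausius-Clapeyron equation.  A minimal sketch, assuming a constant heat of vaporization and a simple barometric formula for pressure versus altitude:

```python
# A sketch of the boiling-point/pressure relationship via Clausius-Clapeyron:
# 1/T(P) = 1/T0 - (R / dH_vap) * ln(P / P0).
# Assumes a constant heat of vaporization, a common approximation.
import math

R = 8.314                     # J/(mol*K)
DH_VAP = 4.07e4               # J/mol, heat of vaporization of water
T0, P0 = 373.15, 101_325.0    # boiling point (K) at 1 atm (Pa)

def boil_temp_c(pressure_pa: float) -> float:
    inv_t = 1 / T0 - (R / DH_VAP) * math.log(pressure_pa / P0)
    return 1 / inv_t - 273.15

def pressure_at_altitude(h_m: float) -> float:
    return P0 * math.exp(-h_m / 8400.0)   # simple barometric formula

for h in (0, 1600, 3000, 8848):   # sea level, Denver, high altitude, Everest
    print(f"{h:>5} m: water boils near {boil_temp_c(pressure_at_altitude(h)):.1f} C")
```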

The figure above is an excellent visual representation of the impact of altitude (pressure) on the boiling point of water.  Similar graphs could be done for gold or lithium.  But how can one visualize the impact of the change in size in the nano realm?  Does there need to be another chart for 10nm gold and another for 15nm gold, etc.?  Or would it be better to pick a single pressure and look at the change in melting point with size, as in the first figure?  Would it then be necessary to have a multitude of curves at various particle sizes?  Is it possible to develop a three-dimensional graph that has temperature, pressure, and size as the axes?  That could be done, as sketched below.
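
As a hedged illustration that such a three-axis graph is buildable, the sketch below reuses the Gibbs-Thomson size term from the earlier example and adds a simple linear pressure term; the pressure slope is an assumed placeholder chosen only to make the surface visible, not a measured value for gold:

```python
# An illustrative three-axis (temperature, pressure, size) surface.
# Reuses the Gibbs-Thomson size term; the melting-curve pressure slope
# DT_DP is an assumed placeholder, not a measured value for gold.
import numpy as np
import matplotlib.pyplot as plt

T_BULK, SIGMA_SL, H_F, RHO = 1337.0, 0.27, 6.37e4, 1.93e4
DT_DP = 30e-9        # K/Pa (30 K/GPa), assumed illustrative slope

d = np.linspace(2e-9, 50e-9, 60)    # particle diameter, m
p = np.linspace(1e5, 1e9, 60)       # pressure, Pa
D, P = np.meshgrid(d, p)
T = T_BULK * (1 - 4 * SIGMA_SL / (H_F * RHO * D)) + DT_DP * (P - 1e5)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(D * 1e9, P / 1e9, T, cmap="viridis")
ax.set_xlabel("diameter (nm)")
ax.set_ylabel("pressure (GPa)")
ax.set_zlabel("melting point (K)")
plt.show()
```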

What happens when the other properties of the materials need to be considered?  Do the electrical properties change as the material becomes smaller?  Yes.  Do the magnetic properties change?  One could say no, but that would not explain the magnetic moment of 13 atoms of silver.  Does the ability of the material to interact with other materials change with size?  Again, the answer is yes.  Can the material change shape?  Certain ones can.  The list of differences in the nano realm is very large.

How should this be handled?  There is a need to define the key characteristics of nanomaterials that have a strong impact on the performance of the materials.  (Do we also do this for each of the predominant isotopes?)  Initially this effort needs to be established at 50, 45, 40, 35, 30, 25, and 20 nanometers.  Below 20nm, it should be done at 2-nanometer intervals.  It is a lot of work, but to fully understand and apply nanomaterials, the information is needed.  Each time new characteristics are determined, that data needs to be incorporated into the available data.

The objection could be that there is no means of using all the multiple charts that would be developed.  An effort like this can apply the rapidly developing field of augmented reality.  The ability to flip through various three-dimensional graphs to observe trends is easier if the data can be projected, rotated, and translated.  Visualization is a powerful tool.  We need a lot more data on nanomaterials and a means of using the data.

References:

  1. http://en.wikipedia.org/wiki/Melting-point_depression
  2. http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch14/melting.php


Are we missing the important capability of nanomaterials?

Particles with a dimension of 100nm or less are considered nanomaterials.  But size alone is not what provides the unique nanomaterial properties that are being observed.  Consider the following: 80nm aluminum particles are dangerous as a possible inhalant, while 30nm aluminum particles are very reactive (explosive) when they come in contact with air/oxygen.  70nm gold particles are fine dust, but below 20nm, these particles change the color of glass when added to it.  A particle does not acquire special properties just because it is less than 100nm.

There are other interesting changes in behavior in the sub-100nm range that can impact how materials behave.  One example is that the adhesion forces of particles to surfaces change as the particles get smaller.  Below roughly 70nm, van der Waals forces become the dominant adhesion mechanism.  Most changes in material property and behavior start to occur somewhere in the 30nm to 70nm size region.  (Part of the reason for the change is that the particle diminishes into the size region where a significant portion of its atoms have “access” to the particle’s surface, with a resulting increased opportunity to react with other material.)
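
A rough back-of-the-envelope sketch of that surface-“access” argument: treating the particle as a uniform sphere and counting the atoms in one outer atomic layer, where the ~0.25 nm layer thickness is an assumed, generic value, so the numbers are indicative only:

```python
# A rough estimate of the fraction of a spherical particle's atoms that sit
# in its outermost atomic layer.  Assumes a uniform sphere and a generic
# ~0.25 nm atomic layer thickness; indicative numbers only.

ATOM_LAYER_NM = 0.25

def surface_fraction(diameter_nm: float) -> float:
    inner = max(diameter_nm - 2 * ATOM_LAYER_NM, 0.0)
    return 1.0 - (inner / diameter_nm) ** 3

for d in (100, 70, 30, 10, 5, 2):
    print(f"{d:>3} nm particle: ~{surface_fraction(d):.0%} of atoms at the surface")
```

The estimate rises from a few percent at 100nm to more than half below about 2nm, which is the qualitative shift described above.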

Currently, research in the nanotechnology realm works with basic elements and combinations of nano-scaled materials, which has created some materials with interesting properties.  Carbon has been the most researched, with single-walled and multi-walled nanotubes drawing a lot of early interest.  The ability of carbon nanotubes to increase the strength of other materials while decreasing weight has been utilized in the automotive and sports industries, to name a couple of areas.  Depending on the chirality of the carbon nanotubes, the resultant material can be either conducting or semiconducting.  “Unrolling” a single-walled carbon nanotube results in the material called graphene, which is called a “two-dimensional” material due to being only one atom thick.  It is conductive.  Saturating the surface with hydrogen produces a material called graphane, which is non-conducting (different name, different properties: graphane, not graphene). [Ref. 1, 2, & 3]

What if we are not considering all the possibilities?

The current published research focuses on using various bulk materials to develop experiments and find the new properties of the nanomaterial.  But what if nano-scale material also differs from bulk material due to the various isotopes of the material?

          Bulk material containing various isotopes

Bulk material consists of various isotopes of the specific element(s) in a ratio that has been determined through various techniques and has been quantified. [Ref. 4]  In general, one assumes that the nano-scale material has the same isotope ratios as the bulk.  One also assumes that the different isotopes of the bulk material have identical properties when present together.  But is it possible that individual isotopes behave differently from the bulk containing various isotopes?

One example that shows there is a difference among isotopes is uranium.  A specific form (isotope) of the element uranium is 235U, which makes up about 0.7% of typical uranium in mined ore.  238U is the predominant form of the element and has a half-life of 4.5 billion years, while 235U is more reactive and can be split to produce energy. [Ref. 5]  The useful isotope, 235U, is separated from the other isotopes of the material.  It might be expensive and challenging, but if a specific isotope is useful, it will be obtained.

Another example of different properties of isotopes involves water.  A water molecule consists of two hydrogen atoms and one oxygen atom.  However, there are other forms (isotopes) of hydrogen that have one or two extra neutrons.  Water molecules that consist of oxygen and deuterium (hydrogen with one extra neutron) are called heavy water and are employed as a neutron moderator in nuclear reactors.  So, one form of hydrogen has properties that the other does not.

It can be argued that in this case the extra neutron causes a significant percentage increase in the atom’s mass.  Work has been done indicating that the impact of additional neutrons is greatest on the elements with the lowest mass.  Some projections imply that the addition of extra neutrons does not have a significant impact, if any, on the material properties.  Is this true?  What about 235U?

Consider lithium, with two stable isotopes, 6Lithium and 7Lithium, with 7Lithium accounting for over 92% of the material.  There are also a number of short-lived lithium isotopes. [Ref. 6 & 7]  It is known that 6Lithium has a greater affinity than 7Lithium for the element mercury.  This fact is used in the separation of the two isotopes of lithium.  How is this possible if the extra neutron does not change the material properties?  Maybe it does!

Is it possible that we need to think about research that works with specific isotopes of “common” materials?  If various isotopes have different reactions with other materials, is it possible that “lumping” all isotopes of an element into an experiment actually degrades the performance that a single isotope might have? At a minimum, research that works with specific isotopes of “common” materials should clearly state the isotopes used, including any percentage of other isotopes that may be present, which may impact performance.

          Material purity

There are various levels of material purity that can be purchased.  Very high purity copper can be obtained at 99.9999% pure.  That is one part per million of impurities.  Other materials are not available in purities greater than 99.99%, and some are not even close to 99%.

It is known that doping semiconductors with a very small percentage of different elements can change the properties of the combined material.  Doping in silicon (to increase the charge-carrier concentration) can range from lightly doped (parts per billion) to heavily doped (parts per thousand) of the doping material.  Depending on the materials employed, other effects besides carrier concentration can be impacted.  Lithium can be employed to increase the resistance of solar-cell material to the sun’s radiation.

Has anyone examined the percentage impurity of isotopes in common materials, like carbon or copper?  In order to conduct that experiment, it would be necessary to have pure material and then add impurities.  How difficult is that?  Consider that conventional computer hard drives require more than 1 million atoms per bit and over ½ billion atoms per byte!  Consider a 2µm copper sphere.  Calculations yield that it should have 3.56 x 10^11 atoms.  If the material is six 9s pure, it contains 356,000 contaminant atoms! [Ref. 8 & 9]  Do we really know the true properties of materials?

When materials are in the nano-scale region, the total quantity of atoms is smaller.  A 30nm aluminum particle has roughly 850,000 atoms in it.  “Super-pure” aluminum can be as much as five 9s pure.  That would still leave around 9 contaminant atoms.  Is that enough to modify material properties?
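
The two counts above are easy to verify.  A short sketch, assuming ideal spheres and handbook densities and molar masses:

```python
# Checking the atom counts quoted above.  Assumes ideal spheres and
# handbook densities/molar masses; the text's figures are rounded.
import math

AVOGADRO = 6.022e23

def atoms_in_sphere(diameter_m: float, density_g_cm3: float,
                    molar_mass_g: float) -> float:
    radius_cm = diameter_m * 100 / 2
    volume_cm3 = (4 / 3) * math.pi * radius_cm ** 3
    return volume_cm3 * density_g_cm3 / molar_mass_g * AVOGADRO

cu = atoms_in_sphere(2e-6, 8.96, 63.55)     # 2 um copper sphere
al = atoms_in_sphere(30e-9, 2.70, 26.98)    # 30 nm aluminum particle

print(f"2 um Cu sphere: {cu:.2e} atoms, "
      f"{cu * 1e-6:,.0f} contaminant atoms at six 9s purity")
print(f"30 nm Al particle: {al:,.0f} atoms, "
      f"{al * 1e-5:.0f} contaminant atoms at five 9s purity")
```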

          Material isotope homogeneity

Another question is whether anyone has worked with a pure isotope of common materials.  Granted, there are techniques for separating isotopes, e.g., for uranium, that can produce high-purity materials.  But when the desired isotope exists as a very small percentage of the total material, it takes many passes through a process to achieve the desired concentration, and that concentration is not 100%.  It may be enough to be effective, but it is not 100%.

Copper has 29 isotopes, with the two predominant being 63Cu (69.2%) and 65Cu (30.8%).  That implies that the pure copper one can acquire will typically be 69% of one type and 31% of the other.  Aluminum is interesting in that, for all practical purposes, 100% of aluminum is 27Al.  So any changes to its properties would be due to contaminants and not isotopes of aluminum.

Thoughts

If we have found that different isotopes of materials may have different properties in the bulk, is it not reasonable to anticipate that there will be different properties in the nano realm?  Maybe we should start to investigate the properties of various isotopes of nanomaterials.  Are we missing some potentially important properties when we do not investigate the isotopes of various nanomaterials?  Do we have the concept of nanotechnology research mis-focused or just misunderstood?  Any thoughts?  Send to: Ideas at nano-blog.com  (The email address has been written with “at” in place of the “at symbol” to avoid spam filling the mailbox.)

Acknowledgements:

Special thanks to Deb, Evelyn, and Harold for critical review and suggestions to improve this blog.

References:

  1. Carbon – http://www.rsc.org/periodic-table/element/6/carbon
  2. CNT – https://web.stanford.edu/group/cpima/education/nanotube_lesson.pdf
  3. CNT – https://www.nature.com/articles/ncomms5892
  4. Isotopes – https://en.wikipedia.org/wiki/Isotope
  5. Uranium – http://www.world-nuclear.org/information-library/nuclear-fuel-cycle/introduction/what-is-uranium-how-does-it-work.aspx
  6. Lithium – https://en.wikipedia.org/wiki/Isotopes_of_lithium
  7. Lithium – http://www.rsc.org/periodic-table/element/3/lithium
  8. Atoms – http://gizmodo.com/5875674/ibm-figures-out-how-many-atoms-it-takes-to-hold-a-bit-hint-its-12
  9. # of atoms – https://socratic.org/questions/a-pure-copper-sphere-has-a-radius-0-929-in-how-many-copper-atoms-does-it-contain


2-dimensional materials and other nano properties

Material: Two-dimensional materials seem to have staying power in various technical news magazines.  The US Department of Energy released a report on efforts involving Lawrence Berkeley National Laboratory work on molybdenum disulfide (MoS2). [Ref. 1]  Granted, quantity production of this or any other 2-D material is difficult.  Research is ongoing to determine the properties of the various 2-D materials, with the knowledge that when some material exhibits characteristics that are very important for device development, someone will develop a means of producing the material in sufficient quantity.  The researchers measured the bandgap of the material and found it to be 30% higher than theoretically predicted.  The fact that they were able to develop a means of accurately measuring the bandgap holds promise for evaluating other materials.  The researchers also found a relation between electron density and the bandgap.  Their findings indicate a possible application in sensors, where optical or electrical effects can produce the complementary effect.

There was a caution that molybdenum disulfide is extremely sensitive to its local environment.  This is no different from the impact of exposing graphene to the atmosphere.  Considering that 2-D materials are one atom thick, 100% of the atoms are on the surface and able to react with the environment.  Contamination by external factors ends up degrading the properties of the basic material.  This fact makes some of the planned applications challenging.

Nano-scale motion: Researchers at Caltech have made measurements of spherical gold nanoparticles moving in water using a technique called liquid-cell 4D electron microscopy. [Ref. 2]  A key element in observing the motion was the application of femtosecond laser pulses.  Their experiments used a liquid, a few hundred nanometers thick, captured between parallel plates.  The particles appear to be driven by steam nanobubbles near the particle surfaces.  This provides the initial action, and the resultant motion is random as particles bounce off other particles.  The hope is that the knowledge gained from this work will help develop both micro- and nano-actuated transport mechanisms.

Light-induced crystal shape changes: Work by scientists at KAUST demonstrated photostriction in perovskite crystals.  In particular, the researchers focused on MAPbBr3.  When the material is illuminated by light, photostriction changes the internal strain in the material.  Their technique, which employs in-situ Raman spectroscopy with confocal microscopy, was able to measure intrinsic photoinduced lattice deformation.  The researchers demonstrated that only part of the change was due to the photovoltaic effect.  They theorize that the generation of positive and negative charges due to the light polarizes the material, which creates a change in the material structure.  The researchers think that understanding the mechanisms behind the structural changes could provide a significant benefit in developing greater-efficiency solar cells.  Other possible applications include optoelectronic devices. [Ref. 3]

Thoughts: The tools for working in the nano realm are improving.  Discoveries of different properties that could be applied to new devices are increasing.  The “nano” revolution has been around for a number of years.  There are nanomaterials being applied to commercial products for increased performance.  Medicine is using nano-sized carriers to combat diseases.  But are we missing something basic?  Are we really using the properties of the nano scale?  More on this line of thinking later.

References:

  1. http://www.newswise.com/doescience/?article_id=680155&returnurl=aHR0cHM6Ly93d3cubmV3c3dpc2UuY29tL2FydGljbGVzL2xpc3Q=
  2. http://nanotechweb.org/cws/article/tech/69765
  3. https://phys.org/news/2017-08-photosensitive-perovskites-exposed.html


Nano-Safety Educational Efforts

There is a book being published in late 2017 by De Gruyter called “Nano-Safety: What We Need to Know to Protect Workers” [Ref. 1].  (Full disclosure: I am one of the editors of the tome and co-author of two of the chapters.)  I am covering this topic because we, the editors, are at the completion of a process that has taken well over two years.  When you see a technical book, you may think, “I could have done that.  It would be easy.”  So, I want to tell a story.  The concept for the book came after more than six years of trying to establish the importance of nanotechnology safety and the need for training people in the proper handling of nanomaterials.  This builds on my blog of December 13, 2014, which talks about the efforts to get contracts to develop the needed procedures.

The material in the book is partially based on the evaluation of feedback received from both students and professional reviewers of the two courses that were created.  Once we had the feedback and the evidence that there was a need for a book addressing nanotechnology safety (Nano-Safety), there was a need to find a publisher.  This was not an easy process.  One needs a technical publisher that is interested in the topic.  Everyone wants nanotechnology this or nanotechnology that, but nanotechnology safety?  That was another story.

Of course, the publisher needs to see the outline of the chapters and the potential authors of the chapters.  The outline will go through a number of revisions, partially because the publisher is looking for some specific ideas.  There is also a need to have authors who are still working in the specific topic identified.  This normally requires some modifications to content as authors change.

Once there is a contract in place, the real work starts.  Authors are informed to move forward and given a deadline of a year or so.  Not hearing anything from an author for 9 months or more is usually a sign of issues developing.  Sometimes a new author must be found.  People get sick, change jobs, or something else happens.  The new author then needs to meet a much tighter time schedule.  Once the draft chapters are in hand, each must be reviewed by three or more people competent in the field.  This is always a problem in emerging fields.

With the reviews of the chapters in hand, each author must be contacted and provided the comments from the reviewers, who are anonymous.  Then the authors revise their manuscripts and submit them to the editors.  Depending on the severity of the comments, the documents may need to go through another review process.  If the comments are minor, the editors may check to ensure all the concerns were addressed properly.  Then the manuscripts are sent to the publisher, who also reviews them.  There may be interaction with the editors on minor points.

Next, the galley proofs arrive, and each author needs to address the minor issues that are identified.  Finally, the book is ready to move onto the publisher’s printing schedule.  Considering everything that takes place, two years is not a long time.

Reference 1: Nano-Safety, Dominick Fazarro, Walt Trybula, Jitendra Tate, Craig Hanks,  De Gruyter Publisher 2017, ISBN 978-3-11-037375-2


Two-dimensional materials and moving them into production

There has been more work reported on 2-D materials, also called atomic-level materials.  Typically, this term refers to a sheet of material that is only one atom thick, with the other two dimensions extending as far as can be produced, which normally is not very far.  The first material referred to this way was graphene.  There are interesting properties: graphene provides strength as well as electrical properties.  There have been claims that 2-D materials will lead to improved performance in solar cells, new electronics based on 2-D transistors, various filtering mechanisms, and even a novel type of semiconductor.

This development has been ongoing for over 10 years.  Like carbon nanotubes, the applications seem almost infinite, but the actual release of products based on the materials is moving forward very slowly.  You could ask why.  For the researcher, the development of novel ideas is what gets papers published or patents issued.  Moving a product into production is a totally different story.  The concerns include customer acceptance, the ability to have a sufficient quantity of the quality material required for the products, a distribution channel, and a guarantee of a quality product.

As the iPhone approaches its 10th anniversary, it is hard to remember what it was like before the iPhone.  One of the more advanced phones was produced by Blackberry.  It had an actual keyboard, albeit small, that could be used for entering data.  The competition had the 12-key keypad that required multiple taps to go from a number to an underlying capital letter and finally to a small letter.  Since each key had at least three letters along with a number, it was a task.  The keyboards took approximately half of the phone face.

Apple took a huge bet to create a phone with a larger screen and incorporated the keyboard onto the screen through the touch display.  That, with the addition of other functions for the phone, enabled sales to skyrocket.  Other companies had to change their models to compete and stay viable.  Apple had gambled, and it paid off.  What if it had failed?  Apple would not be the household word it is today, and it might not have survived.

Manufacturing a product also has risks.  Someone comes along with a process or material that will take 10% off the cost of the final product, yet companies are reluctant to try it.  Why?  A 10% or 15% cost reduction sounds like something that should be done immediately.  The issue is that no new material or process can be introduced and immediately start providing dividends.  Typically, when a new process is introduced, there is a slowdown in production due to working out process bugs or material issues.  Yields normally decline until the bugs are worked out.  All of this is lost sales/revenue.  If the modification cannot be implemented to the level desired, there is more lost product with the corresponding losses.  Sometimes even a 50% improvement might not be sufficient to justify introducing a novel change.

So how does this impact 2-D materials?  One of the greatest problems with 2-D materials is getting sufficient quantities of the quality product needed.  I know of a significant stride in measuring pressure that employed a 2-D material.  The effort crashed when there was an attempt to make more than a few laboratory samples.  The quality, quantity, and size required could not be obtained.

2-D materials need to progress further in development so the quality of the materials can be relied on.  After that is achieved, the size and volume of material production need to be developed.  All of this happens after the applications are first proven in the lab.  This takes time, although everyone would like it to happen faster.  It is coming, but it is coming slowly.  Without any question, more effort is needed to address the manufacturing challenges.
