Faking Science

A portion of this topic was covered in the April 2018 blog, but enough additional material has surfaced that the subject needs to be covered again.  The reproducibility issue covered in April is one thing.  The ability to slant results through the use of statistics [Ref. 1] has been known for a long time.  The omission of data points that do not support the conclusions is another method.  It is always possible to come up with an argument that supports changing results, but is that proper?

The National Institutes of Health Clinical Center Department of Bioethics has a brochure [Ref. 2] that lists seven key steps for the ethical development of research.  Two of these are Scientific Validity and Independent Review.  The impact on society is a key consideration for the overall research.  Medical research is directed at saving lives and improving the quality of life for impacted people.  Consider the following situations:

Professor Brian Wansink of Cornell University was considered a leading researcher in eating behavior.  He resigned earlier this year due to findings that he misreported research data, employed problematic statistical techniques, and did not properly document and retain research results.  Two statistical practices were at issue: p-hacking, which involves running statistical analyses until statistically significant results are obtained, and HARKing, which is hypothesizing after the results are known. [Ref. 3]
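The false positives that p-hacking manufactures are easy to demonstrate.  The sketch below simulates many "studies" on two identical groups; every nominally significant finding is spurious by construction.  The sample sizes, number of metrics, and thresholds are illustrative assumptions, not figures from the Wansink case:

```python
import random

def p_hacked_experiment(n_per_group=20, max_metrics=10, seed=0):
    """Simulate a 'study' that measures up to max_metrics unrelated metrics
    on two IDENTICAL groups and stops at the first nominally significant
    difference.  Any hit is a false positive by construction."""
    rng = random.Random(seed)
    for _ in range(max_metrics):
        a = [rng.gauss(0, 1) for _ in range(n_per_group)]
        b = [rng.gauss(0, 1) for _ in range(n_per_group)]
        mean_diff = sum(a) / n_per_group - sum(b) / n_per_group
        se = (2 / n_per_group) ** 0.5          # z-test, known sigma = 1
        if abs(mean_diff / se) > 1.96:         # nominal p < 0.05
            return True                        # "significant" result found
    return False

# Run 1000 simulated studies: with 10 looks each, far more than 5% "succeed"
hits = sum(p_hacked_experiment(seed=s) for s in range(1000))
false_positive_rate = hits / 1000
print(f"false-positive rate with 10 looks per study: {false_positive_rate:.2f}")
```

With a single honest test the false-positive rate would sit near 5%; with ten looks it climbs toward 1 - 0.95^10, roughly 40%, which is the whole point of the criticism.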

The headline reads: “Harvard Calls for Retraction of Dozens of Studies by Noted Cardiac Researcher.” [Ref. 4]  Dr. Piero Anversa published results suggesting that damaged heart muscle could be regenerated with stem cells, although his work could not be replicated by independent researchers.  There were numerous awards for clinical trials.  The questioning of his work left more than 30 published papers in doubt, calling into question an entire field of study that Dr. Anversa developed.  His institution, Brigham and Women’s Hospital – a Harvard Medical School affiliate – paid $10 million to settle research fraud allegations. [Ref. 5]

Duke University had a researcher in the lab of a prominent pulmonary scientist arrested on charges of embezzlement. [Ref. 6]  The investigation turned up some unusual things.  The end result was that 15 of the scientist’s papers were retracted.  It was claimed that the research in these papers had enabled Duke University to obtain over $200 million in grants.

Unfortunately, these are not isolated cases.  The onlineuniversities.com website provides more details in “The 10 Greatest Cases of Fraud in University Research.” [Ref. 7]  It is worth a quick skim to see the areas of research and the impact on people.  Remember these sentences in the second paragraph?  The impact on society is a key consideration for the overall research.  Medical research is directed at saving lives and improving the quality of life for impacted people.

Other areas of scientific research are not immune from “interesting” shenanigans.  A recent article in the Washington Post [Ref. 8] contains a claim by a scientist that a recently released oceanographic study contains errors that overstated the certainty of its results.  The response was that the scientists were working quickly to create the report and may have included inadvertent mistakes.  Understandable.  However, the National Oceanic and Atmospheric Administration (NOAA) has refused to provide the research data, notes, etc., to Congress. [Ref. 9]  As a federal agency, NOAA receives its budget from Congress, and Congress has oversight responsibility.

The last reference has a number of interesting observations. [Ref. 10]  A key point is that a developing crisis creates the need to investigate and understand its cause and impact.  There is a reference from Al Gore’s book quoting Upton Sinclair: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

If we cannot believe that scientific research is driven by facts, hypothesis development and testing, and then valid conclusions based on reproducible experiments, how can we trust the actions presented as necessitated by the results?


  1. How to Lie with Statistics, Darrell Huff, ISBN-13: 978-0393310726, ISBN-10: 0393310728
  2. https://bioethics.nih.gov/education/FNIH_BioethicsBrochure_WEB.PDF
  3. https://www.wsj.com/articles/a-cornell-scientists-downfall-1537915735
  4. https://www.nytimes.com/2018/10/15/health/piero-anversa-fraud-retractions.html
  5. https://www.thecrimson.com/article/2017/4/28/brigham-pays-fine-fraud-allegations/
  6. http://www.sciencemag.org/news/2016/09/whistleblower-sues-duke-claims-doctored-data-helped-win-200-million-grants
  7. https://www.onlineuniversities.com/blog/2012/02/the-10-greatest-cases-of-fraud-in-university-research/
  8. Scientists acknowledge errors in Study of Oceans, Austin American Statesman, Thursday, November 15, 2018, page A8
  9. http://intelligentuspolitics.com/noaa-refuses-to-provide-climate-research/
  10. Onward, climate soldiers, https://www.washingtontimes.com/news/2018/nov/13/science-loses-when-a-system-of-penalties-and-rewar/



Nanotechnology and Electronics

The competition for nanotechnology applications in electronic circuitry faces challenges in manufacturing volume, reliability, and cost.  Until the manufacturing volume can be demonstrated, the reliability cannot be evaluated.  Cost will be a function of developing means of high-volume manufacturing.  This is still some time in the future.  If the challenges are engineering challenges, the solutions will be found with enough industry effort.  There are three areas where nanotechnology appears to be providing some indication of a promising future: flexible nanomaterial-based circuitry, sensors, and graphene-based circuitry.

Flexible Nanowires:

Researchers from the University of Glasgow [Ref. #1] have developed a process that can print nanowires using silicon and zinc-oxide materials with average diameters of 115nm and spacing of 165nm.  This permits the production of flexible electronic circuits.  An advantage is that the circuitry can be produced over a large area.  The potential of this approach is to provide the underlying circuitry for form-fitting and flexible circuits.  One example that immediately comes to mind is the creation of custom circuits to work on prosthetics.


Sensors:

One area of sensor development is the application of carbon nanotubes embedded within fibers.  Nerve-like conductive composites are produced using electro-deposition of functionalized carbon nanotubes [Ref. #2].  The resulting fibers have been embedded within a number of traditional fabrics.  The coatings range from 205nm to 750nm thick.  The initial work was employed to measure the pressure on the various parts of subjects’ feet.  The claim is that the application can produce superior results in a variety of medical situations where the force pattern is important in the development of an issue or injury.

Another sensor application is the development of an electronic “nose”.  There have been numerous research papers on this topic.  One example [Ref. #3] describes efforts to enable a device employing nanosensors to analyze the various chemicals to which the device is exposed.  The device examines molecules in vapor.  The device is small (handheld) and is accurate to within 20% of a gas chromatograph, which costs significantly more and takes longer.  Applications that require expensive and/or time-consuming measurements can be replaced by portable and inexpensive devices that provide rapid analysis.

Graphene-based Circuitry:

One of the issues of mass production is achieving the required volumes at an efficient cost, which implies that the circuitry must be produced cheaply and rapidly.  Another issue is that any approach must compete with semiconductor manufacturing.  Through the use of lithography for patterning, billions of transistors can be produced very rapidly.  However, the previously continual progress of reducing cost as device dimensions shrink has stalled: the challenges of making nearly single-digit-nanometer devices have become more expensive to overcome.  The consequence is that the cost per transistor has stopped shrinking.

The use of 2-D materials has been promising, but high-volume production has not been proven.  Work at imec [Ref. #4] has focused on building 2-D devices and layering them to produce the desired properties.  The issue with the process is that it is multi-step and involves a number of different materials.  The projection is that material availability in volume is 10 years or more in the future.  The hidden issue is that defect-free 2-D materials are not producible in the dimensions required for volume production.


We are seeing the development of potential materials that can be employed to solve problems that are not currently solvable.  We are still in the materials development phase.  It will be a while before there is truly volume production of nanoscale electronic systems.

  1. https://semiengineering.com/manufacturing-bits-sept-18/
  2. https://www.nanowerk.com/nanotechnology-news2/newsid=50902.php
  3. https://www.sustainablebrands.com/news_and_views/chemistry_materials/sheila_shayon/electronic_nose_nanotechnology_we_can_analyze_safet
  4. https://semiengineering.com/can-graphene-be-mass-manufactured/


Augmented Reality

Previously, I have mentioned how Augmented Reality (AR) could be employed to evaluate nanomaterial properties by switching from a 3-D image containing three different parameters to a different view with one or more of the parameters changed.  The real question is how AR might be beneficial to a large number of people.

One of the challenges of business is the necessity for meetings.  Being able to meet face-to-face enables a strong interaction among the people present.  Conference calls are okay but lack the ability to talk through figures and drawings.  Yes, it is possible to send out the information beforehand and hope people move to the page that is being discussed.

WebEx and similar approaches are beneficial in enabling the presenter to move through a presentation with everyone observing the same specific item being discussed.  Yes, it is possible to add video so that individuals can appear on a screen.  The issue with multiple little rectangular video boxes on the screen is that once a small number of participants is exceeded, it is difficult to focus on the presentation.

I have experienced video conferences where multiple video conferencing rooms are set up.  When this is done correctly, it is almost like sitting across a table from the participants.  The issue with this approach is the requirement for dedicated facilities to enable the communications.

What is it about face-to-face meetings that makes them better than the alternatives we have today?  Face-to-face meetings provide the ability to observe how participants react during a meeting.  Facial changes provide a significant amount of information on how an individual is reacting to the presentation.  This is not possible under current virtual meeting alternatives – today.

Augmented Reality inserts computer-generated graphics into a real-world environment.  In most instances, the generated image is three dimensional.  This requires considerable processing power to achieve the desired images.

What if?  What if it were possible to have a camera taking 3-D images of a meeting participant and sending the data to a computer to create an augmented image of that person?  What if the resolution of that image were sufficient to observe facial expressions (on a 3-D image of the person)?  What if there were sufficient transmission capabilities to send and receive multiple images simultaneously?  What if there were sufficient processing power to resolve all the images being sent?

One can envision a future program that can insert meeting participants into empty chairs around a meeting table.  (Obviously, everyone would need to be wearing an AR headset.)  Physically present people would take a seat at the table.  The computer would provide AR images of the remote people and place them in empty seats.  Hands could be raised to ask questions.  There would be no difference between the physically present and the virtually present people.  Would this approach be as good as an in-person meeting?  Not quite.  The ability to have side discussions during the meeting would still be available, and with sufficient resolution, the reactions of the attendees would be recognizable.  What would be missing is the informal conversations prior to and after the meeting.

Is this possible today?  Not yet.  There are a number of issues, and the technology is still in an early stage.  Consider the data requirements.  If the image is 3-D, the data required is at least twice that of a typical video transmission.  The resolution needs to be higher, which further increases the data to be transmitted.  In some locations, the upload speed is more than a factor of 10 slower than the download speed.  If there are 5 or 10 people in one location, the system will probably not perform well, so higher upload speeds will be required.  The computational power required is also significant.  If the people are rendered virtually, the refresh rate needs to be many times a second, which again raises the data transmission rates required.  The computer that creates each virtual image must refresh it many times a second, and this needs to be done for each of the participants.  Does this mean multiple computers with multiple cores?  Probably.  My guess is that a system with the capability described would have multiple computers, each handling a set number of virtual people.  If that number were exceeded, a second computer would be brought on line.  The possibilities are endless.  We need a company to come forward with an augmented reality meeting product.  Consider that program to be something like WebEx on super-steroids.
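A rough sense of the scale involved can be had with back-of-envelope arithmetic.  Every number below (resolution, refresh rate, compression ratio) is an illustrative assumption, not a measurement of any real AR system:

```python
# Back-of-envelope data-rate estimate for ONE AR meeting participant.
# All figures are illustrative assumptions.

width, height = 1920, 1080      # assumed per-eye resolution
eyes = 2                        # a stereo (3-D) pair roughly doubles the data
bits_per_pixel = 24             # uncompressed RGB
refresh_hz = 60                 # image refreshed "many times a second"
compression_ratio = 100         # assumed video-codec compression

raw_bps = width * height * eyes * bits_per_pixel * refresh_hz
compressed_mbps = raw_bps / compression_ratio / 1e6

print(f"raw: {raw_bps / 1e9:.1f} Gbit/s, "
      f"compressed: {compressed_mbps:.0f} Mbit/s per person")
```

Even after aggressive compression, one participant lands near 60 Mbit/s under these assumptions, so a room uploading 5 or 10 such streams quickly exceeds typical residential upload speeds.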


Modeling and Nanotechnology

Modeling is the development of a mathematical representation of an actual or proposed set (group) of interactions that can be employed to predict the functioning of the set (group) under possible conditions.  Interest in modeling has grown since the late 1970s and early 1980s, coincident with the development of more and more powerful computers.  The size of models has grown, and the sophistication of the evaluation has increased.

What does this have to do with nanotechnology?  It has been demonstrated that physical material properties change as the size of the material decreases into the low double-digit nanometers.  The application of gold nanoparticles to produce the color red in stained glass windows is an example of usage that is hundreds of years old.  The effect of silver nanoparticles on bacteria is another example.  Is it possible to develop a model that will predict the behavior of materials for any size or shape?  The answer is: “Yes, BUT”.

One instance that raises a question about modeling goes back to the early 1960s.  Professor Edward Lorenz (MIT) employed a computer simulation of weather patterns.  Supposedly, he left to get a cup of coffee, and when he returned, he was very surprised.  He was rerunning a simulation he had previously made, except this time he rounded 0.506127 to 0.506.  This small change in value changed the entire pattern of the projected two months of weather results. [Ref. #1]  This result has become known as the butterfly effect, which refers to the idea that a very tiny occurrence can change the course of what follows.  As used in chaos theory, the term represents the sensitive dependence on initial conditions that can produce significant changes in the resulting projections. [Ref. #2]
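Lorenz's full weather model is not needed to see the effect.  The logistic map, a one-line chaotic system used here purely as a stand-in for his simulation, shows the same behavior when the initial value is rounded exactly as he rounded it:

```python
def logistic_trajectory(x0, steps=50, r=3.9):
    """Iterate the chaotic logistic map x -> r*x*(1-x), recording each step."""
    xs = []
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

precise = logistic_trajectory(0.506127)   # full initial value
rounded = logistic_trajectory(0.506)      # the same rounding Lorenz made
divergence = [abs(a - b) for a, b in zip(precise, rounded)]
print(f"step 1 gap: {divergence[0]:.2e}, worst gap: {max(divergence):.2f}")
```

The two runs start a few millionths apart and end up on completely different trajectories, which is the sensitive dependence on initial conditions described above.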

In a quote I attribute to Professor Robert Shannon of Texas A&M University, he said: “All models are wrong!  Some are useful.”  Once naturally occurring randomness affects the data employed, the results are uncertain.  The interesting thought is that a complex model run on a 16-bit computer could produce completely different results from the same model run on a 64-bit computer.  In addition to this difference in precision, the initial starting conditions are important.  In many models, the initial conditions are left empty or zero, and the model is run to remove the “initialization” bias.  Obviously, models like weather forecasting need initialization data.  In the weather example, there are a number of tropical storm forecasting models that are compared.  Each model’s projection is actually based on a number of runs of the data to determine an average or best fit.  In comparisons, the European model, which incorporates more sensor data than the US model, tends to be a bit more accurate.  Predicting smaller effects is more difficult because minor changes in variables cause greater effects when the weather impact is restricted to smaller regions.
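The precision point can be illustrated directly.  The sketch below runs the same chaotic iteration twice: once in full 64-bit floating point, and once with the state rounded each step, a crude stand-in for a lower-precision machine (the map and the rounding level are illustrative assumptions):

```python
def run_model(x0, steps=40, digits=None):
    """Iterate the chaotic map x -> 3.9*x*(1-x).  If digits is given, round
    the state after every step, mimicking a lower-precision computer."""
    xs = []
    x = x0
    for _ in range(steps):
        x = 3.9 * x * (1 - x)
        if digits is not None:
            x = round(x, digits)   # crude stand-in for reduced precision
        xs.append(x)
    return xs

high_precision = run_model(0.7)             # full 64-bit float throughout
low_precision = run_model(0.7, digits=4)    # ~4 decimal digits per step
gap = [abs(a - b) for a, b in zip(high_precision, low_precision)]
print(f"early gap: {gap[0]:.1e}, worst gap: {max(gap):.2f}")
```

The per-step rounding errors are on the order of 0.00005, yet the chaotic dynamics amplify them until the two "computers" disagree completely, exactly the 16-bit versus 64-bit concern.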

So, the question is what kind of results can be anticipated with modeling in nanotechnology.  One example is the recently identified Schwarzite carbon structure. [Ref. #3]  Material with these properties was predicted as early as the 1880s, but no one was able to create it to validate the theoretical (modeling) results.  Now that it has been created, the predicted properties can be tested and evaluated.  Predictions can point out directions to follow, but they do not guarantee that the material will have the specific, predicted properties.

There was a recent article [Ref. #4] that implies computer simulation will be used to develop the laws of nature and end much of the work being done in theoretical physics.  While there might be benefits gained, and some people are indicating that artificial intelligence (AI) will provide interesting breakthroughs, these “discoveries” will still need to be proven.

One thing that modeling cannot do is find surprises.  University of Wisconsin physicists constructed a 2-D form of tungsten ditelluride [Ref. #5] that has unanticipated properties, including “spontaneous electrical polarization” from combining two mono-layers of the material.  Until models can be constructed that contain all the variables and the correct relationships among particles, models will be “wrong” but useful.



  1. https://www.technologyreview.com/s/422809/when-the-butterfly-effect-took-flight/
  2. https://dzone.com/articles/what-role-does-the-butterfly-effect-play-in-tech
  3. https://www.graphene-info.com/schwarzite-carbon-structures-identified
  4. https://www.quantamagazine.org/the-end-of-theoretical-physics-as-we-know-it-20180827/
  5. https://electroiq.com/2018/08/for-uw-physicists-the-2-d-form-of-tungsten-ditelluride-is-full-of-surprises/


Semiconductor Nanotechnology

What is the state of nanotechnology in creating semiconductors?  As the size of individual semiconductor components shrinks, the materials currently employed start to create concerns.  Projections have been made for incorporating nanomaterials, like conductive carbon nanotubes, for interconnects between levels of semiconductor circuits.

A note to readers about semiconductor manufacturing: there are many levels (layers) in a single semiconductor.  The transistor is the basic individual device and consists of a combination of materials that permit an applied charge to either allow electron flow or block it.  Interconnecting one transistor with another requires some conductive material.  With billions of transistors on a single integrated circuit, many levels of interconnects are required.  Conductive paths must connect one level to another, possibly tens of layers removed. [Ref. #1]  Currently, materials like tungsten are employed.

The issue with employing carbon nanotubes is that conducting nanotubes would be required to fill each of the desired connections.  Current carbon nanotube manufacturing technology cannot produce exclusively conducting nanotubes; the unwanted variety cannot be eliminated even to levels of parts per billion.  Even if the desired yield could be produced, how would the nanotubes be placed in the correct locations?

There is work being done on using graphene with germanium [Ref. #2].  It appears to be able to form smooth ribbons with curved surfaces.

Directed self-assembly (DSA) of interconnects is being developed.  The critical issues are both placement and defect identification.  There should be some development along these lines in the next two years.

Another approach is to employ semiconducting nanowires. [Ref. #3]  These structures provide unique material characteristics.  One of the challenges in using these structures is the requirement for placement.  Based on some of the work currently being done, it is possible to have these structures placed on a regular pattern or grid.  Employing the interconnecting levels as currently done could then provide a possible means of going forward.

Currently, the design of transistors is moving to have more of a vertical component.  Employing vertical nanowires (nanotubes) could be a further step in this direction.  It would be possible to mass produce the transistors as is done today, which assumes that yielding the correct type will be possible.  One large problem is still the placement of the devices.  I have seen 100nm vertical nanowire grids created.  The possibility exists of getting that value down to 10nm.  Even at 10nm, the long-term needs may not be satisfied.

Conclusions:  Nanomaterials are being investigated for applications in semiconductors.  The work being done has not yet provided solutions for the high-volume production required for existing state-of-the-art devices.

The above commentary is focused on semiconductor circuitry.  There are efforts addressing the solar energy arena that are incorporating nanotechnology and improving the performance of photovoltaic cells.  There are additional efforts in quantum computing that are in their infancy.  It will be interesting to observe what direction these efforts take and how they can be commercialized.


  1. https://electroiq.com/2011/03/nanotechnology-for-semiconductors/
  2. http://www.advancedmp.com/nanotechnology-semiconductors/
  3. http://www.annexpublishers.co/articles/JMSN/5202-The-Role-of-Nanotechnology-in-Semiconductor-Industry-Review-Article.pdf


Technology Roadmaps

The term “roadmap” implies something (a document, a pictorial representation, etc.) that provides the guidance to get from one point to another.  Due to the lack of direction in developing large-scale nanotechnology applications (author’s opinion), there is a large scattering of uncoordinated efforts to develop various nanotechnology materials and applications.

The semiconductor industry has been employing technology roadmaps since at least the early 1990s.  This guiding document had been called the “International Technology Roadmap for Semiconductors” (ITRS) [Ref. 1].  It has become the “International Roadmap for Devices and Systems” (IRDS) [Ref. 2].  This name change recognizes the fact that the challenges of continual shrinkage bring other aspects of design and manufacturing into the overall equation.

Why is a roadmap needed?  Using a simplified, hypothetical example, consider building a novel type of automotive engine: in this case, a hydrogen-fueled vehicle.  While there are some companies already experimenting with this type of fuel, there is no widespread application.  Among the initial problems to be solved was the composition and storage of the fuel.  Hydrogen must be cooled below 33K (its critical temperature) before it can exist as a liquid.  That is both difficult and creates a potentially explosive situation.  The next step is the development of some intermediary composition that will not pose a public danger and will not evaporate while the vehicle is stored.  So the hydrogen is not stored as a cooled liquid but is instead employed in a fuel cell, which is safe.  The advantage of the hydrogen-fueled vehicle is that the product of the reaction is water.  Once the fuel composition is developed, there is a need to create a means of obtaining the fuel.  Only a very small number of “filling” stations have been created for refueling hydrogen-powered vehicles.  That is the ISSUE!  When developing a vehicle that needs to be refueled, there must be places to refuel it.  Without a readily accessible source of fuel, a large number of people will not purchase the vehicle.

Back to semiconductors.  In the early 1990s, one company developed a 248nm lithography tool.  (248nm is the source wavelength for the tool.)  The shorter the wavelength, the smaller the features that can be created.  Moore’s Law is a reflection of the fact that smaller features result in increased capabilities in the same surface area.  The situation that emphasized the need for a technology roadmap was the 248nm lithography tool.  The tool performed well and would permit an increase in the density of semiconductors.  Yet the introduction of the first 248nm tools into production did not happen!  More is needed than an improved tool.  In its simplest form, the process requires the tool, the means of imaging the patterns (this includes the resist and the equipment for applying it), the development of the patterns (images), and a means of inspecting the results.  What was initially lacking was a resist that could withstand the rigors of volume production.  Basically, this is the oft-repeated scenario of the battle lost for the lack of a nail to attach a horseshoe to one horse’s hoof.  Every piece of the process must be ready to be inserted into production before the entire system can work.  The other issue is when a critical part cannot be supplied in sufficient quantities.  The Wall Street Journal points this out in an article on a shortage of one component for smartphones and other electronics. [Ref. 3]  So there must be a completely developed process, and the manufacturing volume must be sufficient to satisfy consumer needs.
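The connection between source wavelength and feature size is commonly expressed through the Rayleigh criterion, CD = k1 x wavelength / NA.  The k1 factor and numerical aperture used below are typical illustrative values, not figures for any specific tool:

```python
def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.4):
    """Rayleigh criterion CD = k1 * lambda / NA: the smallest printable
    feature.  k1 is process-dependent; 0.4 is a typical illustrative value."""
    return k1 * wavelength_nm / numerical_aperture

# i-line, KrF (the 248nm tools discussed above), and ArF source wavelengths
for wl_nm in (365, 248, 193):
    print(f"{wl_nm}nm source -> ~{min_feature_nm(wl_nm, 0.7):.0f}nm features")
```

The shorter-wavelength sources print proportionally smaller features, which is why moving from 365nm to 248nm tooling mattered so much, provided the resist and the rest of the process were ready.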

So, where does a semiconductor nanotechnology application stand today?  The next blog will address this question.


  1. http://www.itrs2.net/itrs-reports.html
  2. https://irds.ieee.org/roadmap-2017
  3. https://www.wsj.com/articles/cant-find-a-playstation-4-blame-it-on-a-part-barely-bigger-than-a-speck-1530282882


Structured Materials

There is more and more reporting on “structured” materials.  The terminology employed to define “structured” overlaps with “metamaterials.”  First, metamaterials are typically an engineered assembly of various elements constructed in specific atomic arrangements to create non-naturally-occurring combinations and structures that provide unique properties.  The properties of the structure are created in such a manner as to provide interaction with incoming waves, which can be acoustic or electromagnetic.

The Institute of Physics has an article on the “Harry Potter invisibility cloak.” [Ref. #1]  An explanation is given of how metamaterials can bend electromagnetic radiation (in this case light) around an object.  There is a video in Reference #2 that demonstrates the effect of bending light waves.  This is an actual bending of the light rays via the material properties.  There are other examples on the internet if you search for them.  One issue is that the material employed and the structure design are limited to specific frequencies (wavelengths).
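The ray steering behind such cloaks depends on refractive properties that ordinary materials do not have; the best-known single effect is a negative refractive index.  The sketch below applies Snell's law with an assumed negative index, purely as an illustration of how differently such a material bends light (the index values are arbitrary):

```python
import math

def refraction_angle_deg(incident_deg, n1, n2):
    """Snell's law n1*sin(t1) = n2*sin(t2).  A negative n2, the hallmark of
    a negative-index metamaterial, bends the ray to the SAME side of the
    surface normal instead of the opposite side."""
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    return math.degrees(math.asin(s))

ordinary = refraction_angle_deg(30, 1.0, 1.5)    # air into glass
negative = refraction_angle_deg(30, 1.0, -1.5)   # air into a metamaterial
print(f"glass: {ordinary:+.1f} deg, negative-index: {negative:+.1f} deg")
```

The sign flip in the refraction angle is what lets an engineered structure route waves around a region rather than through it, within the narrow frequency band for which it was designed.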

Acoustic metamaterials are materials designed to manipulate acoustic waves (sound waves are in this category).  Reference #3 is from Duke University researchers and was among the first, if not the first, demonstrations of cloaking an object from sound.  The cloak and the object covered appear not to exist.  The structure in this case was accomplished by employing a standard material, plastic, but developing the shape in such a way that the structure appears acoustically to be completely flat.  The compensation for the difference in distance is provided by the form of the structure.

What we are learning is that the arrangements of atoms in these materials are not always what is anticipated.  Reference #4 is an article about the atomic structure of an ultrasound material.  Lead magnesium niobate, which is employed in ultrasound applications, was found to have its atoms shift along a gradient.  (More details are available in reference #4, which links to the actual paper.)

Structured materials had previously been considered the development of materials based on their mechanical properties, not on their electrical, acoustic, optical, or chemical properties.  These materials could range from the sub-micron range to centimeters.  As work in this area continues to smaller and smaller dimensions, other material properties are also changing.  This is opening up a new world of applications.

There is always new information being published on the internet.  A good source of metamaterial information is the Phys.org site [Ref. #5].  This field is even more surprising than the development of graphene.  Looking at the results of finding new applications for materials with unanticipated properties is always thought provoking.  And, if one considers that we have not even approached the application of singular isotopes of materials, it is very difficult to predict what new material properties will be found.  The new tools being developed permit a greater understanding of the mechanisms behind the properties.  Learning about these will permit applications undreamed of, except maybe in science fiction.


  1. http://www.iop.org/resources/topic/archive/metamaterials/
  2. https://www.cnn.com/2016/07/20/health/invisibility-cloaks-research/index.html
  3. https://pratt.duke.edu/about/news/acoustic-cloaking-device-hides-objects-sound
  4. https://www.mse.ncsu.edu/news/structure-of-ultrasound-materials-unexpected
  5. https://phys.org/tags/metamaterials/


Current Challenges in Science – Scientific Rigor

The accuracy of “scientific” research has recently reappeared in the press.  A previous blog (February 25, 2015) addressed the implication of erroneous published medical research results.  An earlier blog (July 17, 2014) discussed falsifying peer review processes.  The concern is that in today’s instant communication world, information can be widely spread without any checking of the truth.  Once something is repeated a few hundred thousand times or millions of times, the concept becomes “fact”.

There is a recent report [Ref. 1] from the National Association of Scholars that considers the use and abuse of statistics in the sciences.  While the paper is lengthy, 72 pages, it contains significant information about various improper practices in reporting research results.  The writing is directed at a general audience and does not incorporate statistical equations or detailed mathematics.

In reality, misleading information can be caused by more than the abuse of statistics, granted that abusing statistics is an important part of creating a narrative that researchers are promoting.  The announcement of significant results without providing information on the size of the experimental data set provides an easy means of misdirecting readers.  The one major medical “mis-study” that used three related people to demonstrate a desired outcome comes to mind quickly.  [Reference intentionally not listed to encourage investigation by the reader.]
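Why the size of the data set matters is easy to show.  The sketch below computes the half-width of a crude normal-approximation 95% confidence interval for an observed response rate; the rate and sample sizes are hypothetical, and the approximation is rough for tiny groups, but it shows the scale of the uncertainty being hidden:

```python
import math

def ci_halfwidth(p_hat, n, z=1.96):
    """Half-width of a normal-approximation 95% confidence interval for an
    observed proportion p_hat from n subjects.  Crude for tiny n, but it
    conveys how wide the uncertainty really is."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A hypothetical "2 of 3 responded" claim vs. the same rate in larger samples
for n in (3, 30, 300):
    print(f"n={n:3d}: 0.67 +/- {ci_halfwidth(0.67, n):.2f}")
```

With three subjects the interval spans more than half the probability scale, so the "result" is nearly meaningless, yet reads as impressively as the 300-subject version when the sample size is omitted.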

Henry Bauer [Ref. 2] points out in his comments on the above-mentioned publication that the real concern is the role of science in society as the source for disseminating reliable information.  This information is employed to make decisions that shape many of the regulations that guide society.  His comment is worth repeating: “A further complication is that fundamental changes have taken place in science during the era of modern science and society as a whole is ignorant of that.”

There have been various reports in the scientific news that some efforts at reproducing research results have not been very successful.  One report indicated that, of roughly 100 experiments explained in sufficient detail to be reproduced, fewer than 20% of the attempts succeeded in repeating the results.  Some of these attempts, and failures, were conducted by the original researchers!

In a synopsis of the previously mentioned article [Ref. 3], it is pointed out that in a 2012 study of fifty-seven “landmark” reports in hematology and oncology, the findings could not be verified in forty-seven of them.  Yet medical decisions and policy are made on the basis of these studies.

Unfortunately, there seems to be a reluctance to share data so that research can be reproduced and the results evaluated.  Currently, there is strong push-back by governmental agencies toward keeping data confidential in research that is being employed to set regulations.

The question that arises is: “How can we trust results being employed to make governmental decisions and regulations if no one can examine the data?”  Whether this credibility issue is addressed will be both interesting to observe and critical to maintaining belief in the scientific work.


  1. Randall, David and Christopher Welser. “THE IRREPRODUCIBILITY CRISIS OF MODERN SCIENCE”, April 2018, National Association of Scholars,  https://www.nas.org/images/documents/NAS_irreproducibilityReport.pdf
  2. https://www.nas.org/articles/comment_on_the_irreproducibility_crisis_henry_h._bauer
  3. https://nypost.com/2018/04/28/calling-out-the-world-of-junk-science/


Changes in Nanotechnology Perspectives

There are announcements about new findings or new concepts in nanotechnology that may not appear to be “big” changes.  Typically, the definition of nanomaterials is: for a material to be called a nanomaterial, its size must be smaller than 100 nm in at least one of its dimensions, or the material should consist of 50% or more particles having a size between 1 nm and 100 nm.  A comment in the latest imec magazine issue [Ref. 1] adds a little more clarity: “What makes these nanoparticles special, is that their properties cannot simply be derived from their bulk counterparts due to quantum physics effects. There is a direct effect of size on several physical and chemical properties.”

“Why the hazardous properties can be different is because the charge density on the surface is different. Since nanoparticles are smaller than bigger particles, the surface is more curved and the charge density is larger. Additionally, their free energy is larger, which can change their catalytic activity. Finally, the number of atoms touching the skin – the first layer of contact – as a percentage is larger than with larger particles. Some of the nanomaterial properties change in a predictable way, others in a threshold way.” [Ref. 2]  But the article also states: “What is important for us is the size threshold, so that similar materials can be treated the same way.”
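The size-based definition quoted above can be expressed as a simple two-rule check.  The following is an illustrative sketch only; the helper name and its inputs are my own invention, not taken from any regulatory text:

```python
def is_nanomaterial(dimensions_nm, fraction_1_100_nm=0.0):
    """Sketch of the size-based nanomaterial definition (hypothetical helper).

    dimensions_nm: external dimensions of the material, in nanometers.
    fraction_1_100_nm: fraction of constituent particles sized 1-100 nm.
    """
    # Rule 1: smaller than 100 nm in at least one dimension.
    if any(d < 100 for d in dimensions_nm):
        return True
    # Rule 2: 50% or more of the particles fall in the 1-100 nm range.
    return fraction_1_100_nm >= 0.5

# A 500 x 500 x 80 nm platelet qualifies by its thinnest dimension.
print(is_nanomaterial([500, 500, 80]))                               # True
# A coarse powder qualifies only if enough of its particles are 1-100 nm.
print(is_nanomaterial([1000, 1000, 1000], fraction_1_100_nm=0.6))    # True
```

Note that either rule alone is sufficient, which is why a thin film or a mostly coarse powder with a large fine fraction can both be treated as nanomaterials.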

As long-time readers of this blog may remember, size makes a large difference in many different ways: aluminum nanoparticles transition in behavior as a size boundary is crossed; gold nanoparticles change color based on size; silver becomes able to kill bacteria as its size decreases past a threshold value.  The effects can be different even for nanomaterials in the same periodic group.

The above quotes are from an article published by imec in their monthly magazine describing an effort to create an understanding of nanotechnology safety among their semiconductor manufacturing workers in Europe.  The US has funded two distinct efforts on nanotechnology safety to provide an education source for both workers and students.  The educational aspects of nanotechnology safety are addressed in an Introductory Course and an Advanced Course funded by NSF at Texas State University. [Ref. 3]  This effort has produced a textbook on nanotechnology safety. [Ref. 4]  OSHA funded an earlier effort at Rice University that developed an eight-hour training course for workers in various industries. [Ref. 5]  (Disclaimer: the author of this blog was involved in the three items referenced immediately above.)

There is new modeling work that describes the ability to create new materials that have unusual properties.  The issue is that developing a model and manufacturing the modeled structure are not straightforward.  Next month, this blog plans to cover the latest information on the modeling efforts.


  1. https://www.imec-int.com/en/imec-magazine/imec-magazine-april-2018/assessing-nanorisks-in-the-semiconductor-industry
  2. Quote by Dimiter Prodanov in Ref. 1.
  3. http://nsf-nue-nanotra.engineering.txstate.edu/curriculum.html
  4. Nano-Safety, Dominick Fazarro, Walt Trybula, Jitendra Tate, Craig Hanks, De Gruyter Publisher 2017, ISBN 978-3-11-037375-2
  5. “INTRODUCTION TO NANOMATERIALS AND OCCUPATIONAL HEALTH” available at https://www.osha.gov/dte/grant_materials/fy10/sh-21008-10/1-introduction.pptx



Medical Nano

The development of nanotechnology in medicine is a longer-term process than nanotechnology in general.  The reason is that applying any technology, device, or medicine to humans involves a process with many steps, requiring a long time to demonstrate the ability to pass all the various regulations.

This month’s blog will look at developments in the last few years in three areas: 1) cancer treatment; 2) applications impacting the heart; and 3) the eye.

Cancer treatment has been a key research area since before 2000.  Initial work involved attaching gold nanoparticles to certain types of viruses.  Cancer cells are hungry and will devour various types of viruses.  By inserting gold nanoparticles into preferred viruses, the cancer cells would grab them and try to consume them.  By illuminating the site, which has a concentration of viruses with gold nanoparticles encapsulated inside them, with IR radiation, the temperature of the virus and the cancer cell can be raised high enough to kill the cancer cells.  In a similar approach, with carbon nanotubes encapsulated in the viruses, exposure to RF waves converts the radiation into heat efficiently and kills the cancer cells.  Where are we today in early 2018?

In January 2018, the Seoul National University Hospital indicated that it has developed a new magnetic nanoparticle that can be employed to improve the therapeutic benefits of cancer treatment.  The application of a magnetic field causes the nanoparticles to heat, which kills the cancer cells.  The claimed benefits are that the treatment can be focused on the specific cancer cells while basically leaving the surrounding healthy cells without damage.  The claim in the latest work is that the magnetic nanoparticles developed are able to heat much faster than previously developed ones, which minimizes the amount of energy required to heat the nanoparticles (temperatures as high as 50 °C could be required).  An additional advantage is that the nanoparticles contain the same components as a previously FDA-approved iron oxide nanoparticle. [Ref. 1]

As I was proofing this blog, I received an article from R&D Magazine that provides a history of the direction of cancer treatment. [Ref. 2]  “Today there are about 20,000 papers written on this topic each year. A lot of researchers are starting to work in this area and we (the National Cancer Institute – NCI) are receiving large number of grant applications concerning the use of nanomaterials and nano-devices in novel cancer interventions. In last three years there has been two FDA approvals of new nanoparticle-based treatments and a multitude of clinical trials.”  I recommend reading it.

Nanotechnology is also being applied to address heart issues.  A nanoparticle developed by University of Michigan researchers could be the key to a targeted therapy for cardiac arrhythmia, which impacts 4 million Americans each year and results in 130,000 deaths.  Cardiac arrhythmia is a condition that causes the heart to beat erratically and can lead to heart attack and stroke.  Currently, the disease is treated with drugs, with the possibility of serious side effects.  Cardiac ablation is also employed, but the high-powered laser damages surrounding cells.  The current work, not yet done on humans, kills the cells causing the problem without damaging the surrounding cells. [Ref. 3]

There is also work being done in cryogenically freezing and rewarming sections of heart tissue for the first time, an advance that could pave the way for organs to be stored for months or years.  By infusing tissue with magnetic nanoparticles, frozen tissues can be heated in an excited magnetic field, which generates a rapid and uniform burst of heat.  Most prior work on warming tissue samples has run into problems with the tissues shattering or cracking.  If this can be fully developed, the availability of transplant organs becomes much greater.  Donor organs start to die as soon as the organ is cut off from the blood supply.  Possibly as much as 60% of potential donor organs are discarded because of the four-hour effective ice-cooling limit of organs. [Ref. 4]

Pioneering nanotechnology research has applications for cardiovascular diseases.  Samuel Wickline, MD, the founding director of the USF Health Heart Institute, has been harnessing nanotechnology for molecular imaging and targeted treatments.  His team has developed nanostructures that can carry drugs or act as therapeutic agents themselves against various types of inflammatory diseases, including cancer, cardiovascular disease, arthritis, and even infectious diseases like HIV. [Ref. 5]

Treatment of the eye has a number of needs.  Typically, less than 5% of a medicine dose applied as drops actually penetrates the eye – the majority of the dose is washed off the cornea by tear fluid and lost.  Professor Vitaliy Khutoryanskiy’s team has developed novel nanoparticles that can attach to the cornea and resist the wash-out effect for an extended period of time.  If these nanoparticles are loaded with a drug, their longer attachment to the cornea will ensure more medicine penetrates the eye and will improve drop treatment.  The research could also pave the way for new treatments of currently incurable eye disorders such as Age-related Macular Degeneration (AMD) – the leading cause of visual impairment, with around 500,000 sufferers in the UK.  While there is no cure for AMD, experts think its progression could be slowed by injections of medicines into the eye.  This new development could provide a more effective solution through the insertion of drug-loaded nanoparticles. [Ref. 6]

A coming generation of retinal implants that fit entirely inside the eye will use nanoscale electronic components to dramatically improve vision quality for the wearer, according to two research teams developing such devices.  Current retinal prostheses, such as Second Sight’s Argus II, restore only limited and fuzzy vision to individuals blinded by degenerative eye disease. Wearers can typically distinguish light from dark and make out shapes and outlines of objects, but not much more.  The Argus II contains an array of 60 electrodes, akin to 60 pixels, that are implanted behind the retina to stimulate the remaining healthy cells. The implant is connected to a camera, worn on the side of the head, that relays a video feed. [Ref. 7]

The above descriptions are only the tip of an iceberg.  There is much work being done around the world.  There has been an interesting series of articles on the development of various medical technologies through joint efforts teaming US and Chinese universities.


  1. http://www.koreabiomed.com/news/articleView.html?idxno=2283
  2. https://www.rdmag.com/article/2018/02/nanoparticle-based-cancer-treatment-look-its-origins-and-whats-next
  3. http://ns.umich.edu/new/releases/23249-nanotechnology-could-spur-new-heart-treatment
  4. https://www.theguardian.com/science/2017/mar/01/heart-tissue-cryogenics-breakthrough-gives-hope-for-transplant-patients
  5. https://hscweb3.hsc.usf.edu/blog/2017/01/20/pioneering-nanotechnology-research-applications-cardiovascular-diseases/
  6. https://www.nanowerk.com/nanotechnology-news/newsid=37649.php
  7. https://www.technologyreview.com/s/508041/vision-restoring-implants-that-fit-inside-the-eye/
