
CETAS.Technology Intelligencer: Biomimetic Photographs and Memories

On this shortest week of the year, we slow down, cook lots of good food, and burn the lights of memory and lived holiday traditions. Whether or not your holiday behavior is intelligent, well, that depends on how much rum is in your eggnog…

Recall that in last week’s update we reviewed some new developments in an intelligent, supramolecular assembly language, including an equation to measure the intelligence of a system.

If intelligent, adaptive behavior is a measure of maximizing future options and pointing the system toward the resources to get there, it would certainly help to have a gas gauge and some kind of a memory to make comparisons with. But what are we trying to do with all this?

Biomimetic Roots

The answer seems to have been given decades ago by a couple of amazing brothers, Frank and Otto Schmitt. Both obtained PhDs from Washington University, hung out with Nobel laureates like Compton and A.V. Hill, and became pioneers of biophysics and neuroscience.

The younger of the two, Otto, coined the term “biomimetics,” and his 1937 thesis produced the now famous Schmitt trigger, which is found in nearly every microprocessor in use today. The Schmitt trigger is basically a way of converting analog signals to digital, acting as both a comparator and a de-noiser. Any kind of biological or electronic feedback system probably involves some variant of the Schmitt trigger.

Otto was a skilled gadgeteer and engineer. His older brother Frank paved the way for him at Washington University in St. Louis: Otto essentially played hooky his entire senior year of high school and never did receive his diploma, but based on his prowess in building equipment for Frank’s lab, he was granted early admission to the university.

Frank and Otto spent summers at Woods Hole, where they studied the biophysics of squid and crab neurons. Otto’s 1937 thesis was an attempt to reverse engineer the electrical signals from crab neurons. Eccles, Hodgkin, and Huxley later won their 1963 Nobel Prize in Physiology or Medicine for similar studies in the giant squid axon.

Frank went on to MIT, where he chaired the first NIH-sponsored study group on Biophysics in 1955 and founded the Neuroscience Research Program (NRP), which set the national agenda for neuroscience research for decades. Otto, after World War II, eventually settled at the University of Minnesota. Francis describes both his own and Otto’s career paths in his autobiography, Never-Ceasing Search.

Francis (Frank, left) and Otto (right) Schmitt, about 1974, around the time “biomimetics” appeared in the Merriam-Webster Dictionary

Otto had spent his war years at the Airborne Instruments Laboratory, developing the Magnetic Anomaly Detectors used by the Navy to detect German submarines. As a result of his defense-related work, during the 1950s he became involved in early manned-spaceflight efforts. The same year Eccles, Hodgkin, and Huxley received their Nobel Prize, Otto spoke at a Bionics conference in Dayton, saying this about the term he was later credited with creating, “biomimetics,” and how it might apply to manned spaceflight:

“Presumably our common interest is in examining biological phenomenology in the hope of gaining insight and inspiration for developing physical or composite bio-physical systems in the image of life.”

Alex, What is Hysteresis?

While Otto’s 1938 paper, “A Thermionic Trigger,” outlines the circuit’s design and applications, it gives little detail on how he teased those hints out of crab nerves. For that, we have to go to a 1940 article, “Electric Interaction Between Two Adjacent Nerve Fibers,” which he wrote while on a National Research Council-funded fellowship with A.V. Hill at University College London. What Otto was going for with his trigger was to create a synthetic neuron.

By that point, based on his thesis work as well as observations of Hodgkin’s work over the previous three years, it was clear that signals propagate along nerves as a wave of excitation. What was unclear was how this proceeds without causing noisy interference between adjacent nerves.

Otto would later compare nerve signals to the coaxial cables he was familiar with from his Navy work with radar and signals. What was required was a way to compare signals, set a threshold for signaling, and make sure the nerve didn’t respond to adjacent noise:

“The possibility of such an interaction between separate, active and resting, units is of interest from several aspects. (i) Normally, local currents set up in the vicinity of an active region do not, and obviously must not, excite adjacent fibres. Some mechanism apparently is present by which, not only the further propagation of the impulse in the active fibre, but also its “isolated conduction” is ensured. (ii) A subthreshold effect of an action potential on an adjacent fibre must be expected, however, since some part of the local current is bound to penetrate the surrounding tissue.”

The result of Otto’s biologically inspired tinkering was a self-adjusting memory, comparator, and de-noiser, all in one elegant, square “hysteresis” curve, which is the symbol used for his trigger in circuit designs.

By Alessio Damato [GFDL (http://www.gnu.org/copyleft/fdl.html), CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/) or CC BY-SA 2.5-2.0-1.0 (http://creativecommons.org/licenses/by-sa/2.5-2.0-1.0)], via Wikimedia Commons
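To make the comparator-with-memory idea concrete, here is a minimal software sketch of a hysteresis comparator (my own Python illustration, not Schmitt’s circuit): the output flips high only when the input crosses an upper threshold and flips low only when it crosses a lower one, so the current output acts as a one-bit memory and noise between the two thresholds is simply ignored.

```python
# Minimal software analogue of a Schmitt trigger (illustrative sketch only,
# not Schmitt's circuit): a comparator whose current output is its "memory".

def schmitt_trigger(samples, v_low=0.3, v_high=0.7, state=False):
    """Return a clean digital state for each analog sample.

    The state flips high only above v_high and low only below v_low;
    anything between the two thresholds leaves the previous state unchanged,
    which is what rejects noise riding on a slowly varying signal.
    """
    out = []
    for v in samples:
        if v >= v_high:
            state = True
        elif v <= v_low:
            state = False
        # between the thresholds: keep the prior state (hysteresis = memory)
        out.append(state)
    return out

# A noisy ramp up and then down: a single-threshold comparator would chatter,
# but the Schmitt trigger switches exactly once in each direction.
noisy_ramp = [0.1, 0.35, 0.32, 0.5, 0.66, 0.72, 0.68, 0.9, 0.66, 0.5, 0.33, 0.28, 0.1]
print(schmitt_trigger(noisy_ramp))
```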
Fast-Forward 75 Years

What is amazing about Schmitt’s trigger is how far down into the molecular and atomic underpinnings of biology this intelligent, self-adjusting circuit appears to reach. As noted last week, recent advances from Japan in creating biomimetic structures have stemmed from studying the biophysics of microtubules, key components of the cell’s skeletal and signaling machinery.

Several of the AFOSR-sponsored grant publications and patents highlight similar aspects of microtubules, including references to them as “programmable switches” and figures showing a square electronic hysteresis curve. See, for instance, Figure 3 of “Multi-level memory switching properties of a single brain microtubule.”

In 2014, the Japan-based team spent a year at MIT, revisiting nerve conduction studies, this time using biophysical tools that allow atomic-scale measurements not only of the nerve membrane but also of the microtubules within the nerve. The data obtained are compelling, pointing to multiple levels of memory and feedback before, during, and after nerve firing, and will appear this coming year.

But because microtubules appear to exist at the interface of physical and bioelectromagnetic forces, it is not surprising that they also have mechanical memory properties. A recent paper, “Why Microtubules Run in Circles: Mechanical Hysteresis of the Tubulin Lattice,” points out their role as both sensors and shapers of cellular forces, structures, and functions.

“Curved states can be induced via a mechanical hysteresis involving torques and forces typical of few molecular motors acting in unison. This lattice switch renders microtubules not only virtually unbreakable under typical cellular forces, but moreover provides them with a tunable response integrating mechanical and chemical stimuli…”

The relationship of microtubules to molecular motors and viral metastability was described in last week’s update. In the same way, mechanical metastability of microtubules allows them to “remember” force-induced curved shapes. These can arise, for instance, when molecular motors carry viruses and other packages along the microtubules. These properties resemble man-made shape-memory materials.

Maxwell, lo these many years

The point is, mechanical and electromagnetic forces are working hand in hand. The interchangeability between intracellular and intramolecular forces, energy, resource use, and intelligence almost smacks of Maxwell’s now famous equations, recently highlighted in a cover story for IEEE’s Spectrum. The article points out that Maxwell’s equations did not immediately leap to the fore; it took 50 years, and a self-taught English telegrapher, Oliver Heaviside, to recast the equations into the form most high-school physics students now recognize:

Today, we learn early on that visible light is just one chunk of the wide electromagnetic spectrum, whose radiation is made up of oscillating electric and magnetic fields. And we learn that electricity and magnetism are inextricably linked; a changing magnetic field creates an electric field, and current and changing electric fields give rise to magnetic fields.

We have Maxwell to thank for these basic insights. But they didn’t occur to him suddenly and out of nowhere. The evidence he needed arrived in bits and pieces, over the course of more than 50 years.

Otto Schmitt, never one to shy away from a bit of speculation, said this of Maxwell, alluding to more bits and pieces that may finally be starting to bubble up around mind/brain biophysics:

I suspect that there is another whole layer of biomathematics dealing with these mental processes–radiation-like phenomena, but probably not just ordinary Maxwell’s Equations things… We traditionally think of electromagnetic fields as having an H vector and an E vector and that each generates the other… so I came up with this notion that there could be a non-orthogonal electromagnetic field which of course wouldn’t really bother with the shielded wires and so on, have different properties, and that Maxwell simply hadn’t run his mathematics into these special mathematics. (Harkness, 2002)

Virology Update

The Dayton IEEE virology study group is continuing with the Columbia MOOC; we are now focusing on the viral genome lectures (#6 and #7). There will probably be more to relate to the molecular design intelligence discussion, especially as we delve into the structures and mechanisms that genomes, both host and viral, have evolved to optimize their adaptive responses.

 

CETAS.Technology Intelligencer: Inaugural Edition


This is the inaugural edition of the CETAS.Technology Intelligencer. The intent is to provide intelligence on issues related to emergent socio-technical complexity.

The Road to Superintelligence(?)

Last week Nick Bostrom gave a presentation at NOBLIS, titled “Superintelligence: Paths, Dangers and Strategies.” Bostrom is promoting his latest book, which traces multiple human trajectories as they relate to potentially emerging technological and social superintelligence(s).

Bostrom envisions two attractor states for humanity’s future, of either “Cosmic Endowment,” or “Extinction,” with varying degrees of sustainability/viability in between.  Along those pathways, his institute spends a lot of time looking at the landscape of existential risks to humanity, and explores various causal pathways. Those pathways encompass a pool of potential technological discoveries, some of which could be potential “Black Swans.”

Bostrom’s point of “Differential Technology Development” suggests that in order to avoid tech Black Swans, some forms of tech should be pursued before others. Daniel Suarez, in his latest novel, Influx, explores a world in which potentially disruptive Black Swan technologies are pre-empted by a super-secretive Bureau of Technology Control.  They are then either co-opted for government use and planned release, or are actively suppressed.

The bulk of Bostrom’s book is about the dynamics of what would occur with development of a superintelligence, and what the variables in the kinetics of its development might be.  The most compelling part of Bostrom’s NOBLIS talk was where he pointed out the impact of “Differential Technology Development” as it could play out in emerging superintelligence(s).

For instance, he points out that hundreds of people worldwide are working on developing new machine-learning algorithms, and faster hardware. But only a handful of people are working on the control problem–how would you manage such a superintelligence?  Shouldn’t the control systems be in place before we get an incrementally self-improving Artificial General Intelligence?

Bostrom is not alone in worrying about the pace and potential social impact of emerging technologies. Elon Musk has called emerging superintelligence a potential “demon,” and in a since-deleted post suggested such a danger could be only five years away. DeepMind agreed to be acquired by Google only on the condition that an ethics board be set up to explore the potential impacts of where machine learning could go in the hands of a data behemoth like Google.

Moore’s Law and Forks In The Road

One of the variables in Bostrom’s superintelligence trajectory is the competence issue: how fast will we get to a self-organizing, self-programming, self-improving AGI? We generally know that Moore’s Law has nearly tapped out the potential of CMOS as a computational substrate. Estimates vary, but conservatively we are only 3-5 years away from the ultimate 3-7 nm feature limits of CMOS. DARPA and other funding agencies are actively looking for the next computational substrate, but so far no clear candidate has emerged for next-generation hardware able to sustain the exponential doubling times of Moore’s Law. So one of the wild cards is: what’s the next substrate?
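As a rough sanity check on that 3-5-year figure, here is a back-of-the-envelope sketch; the starting node, shrink factor, and cadence below are my own round assumptions, not figures from Bostrom or DARPA.

```python
# Back-of-the-envelope sketch with assumed round numbers: how many ~2-year
# process nodes until CMOS feature size hits the cited 3-7 nm floor?

feature_nm = 14.0      # roughly the leading-edge node around 2014 (assumption)
scale_per_node = 0.7   # classic ~0.7x linear shrink per node (assumption)
years_per_node = 2.0   # rough node cadence (assumption)
limit_nm = 5.0         # middle of the 3-7 nm range quoted above

years = 0.0
while feature_nm > limit_nm:
    feature_nm *= scale_per_node
    years += years_per_node
    print(f"after {years:4.0f} years: ~{feature_nm:.1f} nm")

# The loop lands in the 3-7 nm regime within two to three more nodes,
# i.e. roughly 4-6 years, consistent with the estimate above.
```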

Another variable is that nobody REALLY knows what the computational capacity of a human brain is. The most widely circulated estimate, put forth by Ray Kurzweil in his book The Singularity Is Near, is 2×10^16 calculations per second. It turns out that number is based on how often a single neuron fires, around a thousand times per second, and it does not differ much from von Neumann’s estimate of the 1950s.

Part of the reason for that conservative estimate is that the Hodgkin-Huxley view of the neuron as the brain’s basic “computational primitive” has also changed very little since the 1950s. However, at a 2010 Google workshop on Quantum Biology, computational biophysicist Jack Tuszynski suggested that if you assume nanosecond-timescale transitions at the cytoskeletal level, and that some kind of massively parallel computation is going on at the atomic scale WITHIN the neuron, then the human brain does something more like 10^28 calculations per second.
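For a feel for where those two figures come from, here is a crude order-of-magnitude sketch; the counts and rates below are round assumptions of mine, not numbers taken from Kurzweil or Tuszynski.

```python
# Crude order-of-magnitude sketch of the two brain-capacity estimates.
# All counts and rates are assumed round numbers for illustration only.

# Neuron-level estimate (Kurzweil/von Neumann style):
neurons = 1e11                # neurons in a human brain
connections_per_neuron = 1e3  # synaptic connections per neuron
firing_rate_hz = 2e2          # ~200 "transactions" per second per connection
neuron_level = neurons * connections_per_neuron * firing_rate_hz
print(f"neuron-level estimate:  ~{neuron_level:.0e} calc/s")   # ~2e16

# Intra-neuron, cytoskeletal estimate (Tuszynski-style assumptions):
tubulins_per_neuron = 1e8     # tubulin dimers per neuron (order of magnitude)
switch_rate_hz = 1e9          # nanosecond-scale conformational transitions
cytoskeletal = neurons * tubulins_per_neuron * switch_rate_hz
print(f"cytoskeletal estimate:  ~{cytoskeletal:.0e} calc/s")   # ~1e28
```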

Viral Intelligence?

As I mentioned in my previous update, our local IEEE section currently has a study group of scientists and engineers who are sitting through Vincent Racaniello’s online Virology course. The multi-disciplinary group is made up of biologists, computer scientists, and engineers, all interested in developing novel, biologically-inspired malware detection and defense systems. We also think that understanding biophysical laws of supramolecular assembly used by viruses can be useful for engineering nanotech devices.

Last week’s HIV-microtubule lectures included the observation that once biological viruses are internalized by the cell, most of them get carried to the nucleus along microtubules by molecular motors. Here, for example, is a picture of HIV virions, labeled with green fluorescent protein (GFP), making their way along the microtubule highway. Normally virions can’t be visualized optically, but the intense GFP signal makes this amazing image possible.

The details of viral coupling to molecular motors are only now being worked out, but one of the things that happens to the virus while it’s motoring along the microtubule is that it gets “uncoated,” which allows the nucleic acid payload to be delivered to the replication machinery in the host cell’s nucleus.

The microtubule-virus interaction is very exciting, especially to my colleagues at Japan’s National Institute of Material Science, who just published their latest biophysical observations of microtubules in Nature’s Scientific Reports. The latest paper is the fourth to acknowledge a 2010-2013 grant I awarded when I was Deputy Director of AFOSR’s Tokyo Detachment. The title of the original grant was “Information Processing in Single Microtubules.” What the latest paper shows is:

  • A “common frequency” of 3.77 MHz as a set point between mechanical and electromagnetic oscillations for all tubulins (plant, animal, and fungal), except for tubulin from cancer cells, which shows no set point.
  • Data suggesting a pattern-based, geometric language of biological assembly, built on a chain of resonant vibrations across multiple timescales and frequency domains.
  • Use of these observations by the Japan team to build self-assembling, nano-scale dendrimers.

The idea of a “common frequency” for a protein raises the question of a possible fourth state of condensed matter: one where you might be able to bake “intelligence” into a molecule, along with a “resonance chain” that allows for massively parallel communication and coordination between and within cells.

In light of these observations, we think we can drive viral-microtubule interactions and uncoating with the proper externally applied electromagnetic frequency. We also believe the same approach could be applied to untangling misfolded protein aggregates, such as the neurofibrillary tangles seen in Alzheimer’s disease. Several experiments along these lines are planned for 2015.

This week’s virology lectures provided just as much key technical insight. In lecture #4, Racaniello describes viruses as molecular machines. As such, the free-energy landscape is an important driver of their behavior. One of the related keys to viral success is their “metastability.” Here, metastable just means contextually stable.

What that means is that viruses need enough stability to protect their payload from external damage, but once they reach the target location inside the cell, they need to be unstable enough to fall apart and release the genome. How does a virus “know” all that?
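One way to picture that contextual stability is as a two-state system separated by a free-energy barrier: the assembled capsid sits in a local minimum deep enough to survive the trip, until a cue (receptor binding, a pH change, motor-driven force) lowers the barrier and disassembly takes over. Here is a toy Boltzmann/Arrhenius sketch; the energies are illustrative numbers of mine, not values from the course.

```python
import math

# Toy two-state picture of capsid metastability (illustrative numbers only):
# state A = assembled capsid, state B = uncoated capsid. The equilibrium
# populations follow a Boltzmann factor in the free-energy gap, while the
# *rate* of leaving A depends on the barrier height.

kT = 0.6  # thermal energy in kcal/mol at ~300 K

def population_ratio(delta_g):
    """Equilibrium ratio B/A for a free-energy difference delta_g (kcal/mol)."""
    return math.exp(-delta_g / kT)

def relative_escape_rate(barrier):
    """Arrhenius-style relative rate of crossing a barrier (kcal/mol)."""
    return math.exp(-barrier / kT)

# Uncoating is thermodynamically downhill, so at equilibrium the
# disassembled state wins by a huge margin...
print(f"B/A at equilibrium: {population_ratio(-10.0):.1e}")

# ...but outside the cell the barrier is high, so the capsid is kinetically
# trapped: metastable, i.e. stable "for now, in this context".
print(f"escape rate, high barrier: {relative_escape_rate(20.0):.1e}")

# Once the uncoating cue lowers the barrier, escape becomes many orders
# of magnitude faster and the genome is released.
print(f"escape rate, lowered barrier: {relative_escape_rate(5.0):.1e}")
```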

Our lab has been studying THz spectroscopy of viruses, and in collaboration with Peter Ortoleva’s group at Indiana University we note several THz-scale resonant frequencies and find that the free-energy landscape is a big driver of viral capsid conformation. Here, for instance, are molecular dynamics models of a viral coat at differing energy levels.

[Molecular dynamics renderings of the viral capsid at two energy states: 1cwp-swell-OPs.0006-Fig.3b1 and 1cwp-swell-OPs.0005]

Racaniello compares the viral capsid to the Japanese toy Bakugan, which is a spring-loaded ball. When the parts are compressed together, it rolls along in a higher potential-energy state. But when it encounters the right signal, a magnet, the latches release and the ball falls open. As the images above show, the energy state on the right would be permissive for viral genome release.

One point Racaniello has made repeatedly is that they took great pains to remove any references in the notes or text that might attribute anthropomorphic intentions to viruses. Viruses don’t “want” to do anything.

But of course, we still couldn’t resist the urge to ask, “do viruses exhibit adaptive/intelligent behavior?”

An Equation for Intelligence?

Certainly the Japanese microtubule and “brain jelly” molecular-computing work raises the question of a “supramolecular assembly language.” So it would be really nice if we could find a simple, elegant way to measure intelligent/adaptive behavior at that scale. It turns out there may be such an equation!

Alex Wissner-Gross’s recent paper on causal entropy, and his more accessible TED Talk, outline just such an equation, in which “intelligence” is a measurable, physical force: one that resists entrapment and seeks to keep future options open. The force is proportional to the gradient of the entropy of accessible futures, computed over a chosen time horizon, so it points the system toward states that preserve free energy, available resources, and the ability to keep predicting and optimizing its options.

F = T ∇ S_τ
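Here F is the entropic force, T is a temperature-like constant setting its strength, and S_τ is the entropy of the distinct futures reachable within a time horizon τ. As a toy illustration (my own sketch, not code from the Wissner-Gross paper), consider a particle on a bounded 1D lattice: approximate S_τ(x) by the log of the number of τ-step walks from x that stay inside the box, and the resulting “force” pushes the particle away from the walls, toward the position with the most future options.

```python
import math

# Toy illustration of a causal entropic force (my sketch, not the authors'
# code): a particle on a bounded 1D lattice. S_tau(x) is approximated by the
# log of the number of tau-step nearest-neighbour walks starting at x that
# stay inside the box, and the "force" is T times the discrete gradient of S.

def num_paths(x, tau, lo=0, hi=20):
    """Count tau-step walks from x that never leave the interval [lo, hi]."""
    counts = {x: 1}
    for _ in range(tau):
        nxt = {}
        for pos, n in counts.items():
            for step in (-1, +1):
                p = pos + step
                if lo <= p <= hi:
                    nxt[p] = nxt.get(p, 0) + n
        counts = nxt
    return sum(counts.values())

def causal_entropic_force(x, tau=10, T=1.0):
    """F ~ T * dS/dx, with S(x) = log(number of viable future paths)."""
    s_right = math.log(num_paths(x + 1, tau))
    s_left = math.log(num_paths(x - 1, tau))
    return T * (s_right - s_left) / 2.0   # central difference

# The force is positive near the left wall, negative near the right wall,
# and ~0 in the middle: it herds the particle toward maximum future freedom.
for x in (1, 5, 10, 15, 19):
    print(x, round(causal_entropic_force(x, tau=10), 3))
```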

What If?

So to loop this back to Bostrom’s idea of differential tech development, and focus on the control systems, what if we were to combine the Wissner-Gross intelligence equation with a grammar-driven language? And what if the Japanese work is correct in positing a frequency-fractal, geometric language of nested rhythms within biological systems, and that language has its own dictionary, design parameters, lexicon, grammar, and ontologies for assessing costs, risks and rewards?

Then that might start to look like a super-intelligent control system.

Let’s build that first.

Touching Viruses

Because the lab I’m a visiting scientist in has funding from NSF to develop new THz methods for detecting viruses and spores, I’ve been reviewing what’s happened in the 25 years since I last studied virology. Vince Racaniello’s online course and weekly podcast are awesome resources.

As a molecular toxicology guy, I used a number of virology-based tools and assays to study the impact of heavy metals on DNA. More recently as a program manager for AFOSR in Tokyo, I supported a lot of research at the nano-bio-info-cognitive nexus.

When you start looking at things at the nano-scale, they really start to get interesting. Viruses are great examples of how nature has designed bottom-up nanotech, so expect to see a lot of updates from the CETAS realm touching viruses.

Just don’t stick your finger in your nose after touching them.