CETAS.Technology Intelligencer: Inaugural Edition


This is the inaugural edition of the CETAS.Technology Intelligencer. The intent is to provide intelligence on issues related to emergent socio-technical complexity.

The Road to Superintelligence(?)

Last week Nick Bostrom gave a presentation at NOBLIS titled “Superintelligence: Paths, Dangers, Strategies.” Bostrom is promoting his latest book of the same name, which traces multiple trajectories for humanity as they relate to potentially emerging technological and social superintelligence(s).

Bostrom envisions two attractor states for humanity’s future: “Cosmic Endowment” or “Extinction,” with varying degrees of sustainability/viability in between. Along those pathways, his institute spends a lot of time mapping the landscape of existential risks to humanity and exploring various causal pathways. Those pathways encompass a pool of potential technological discoveries, some of which could be “Black Swans.”

Bostrom’s point about “Differential Technological Development” suggests that, in order to avoid technological Black Swans, some forms of technology should be pursued before others. Daniel Suarez, in his latest novel, Influx, explores a world in which potentially disruptive Black Swan technologies are pre-empted by a super-secretive Bureau of Technology Control. They are then either co-opted for government use and planned release, or actively suppressed.

The bulk of Bostrom’s book is about the dynamics of what would occur once a superintelligence begins to develop, and what the variables in the kinetics of that development might be. The most compelling part of Bostrom’s NOBLIS talk was where he pointed out how “Differential Technological Development” could play out in emerging superintelligence(s).

For instance, he points out that hundreds of people worldwide are working on developing new machine-learning algorithms and faster hardware, but only a handful are working on the control problem: how would you manage such a superintelligence? Shouldn’t the control systems be in place before we get an incrementally self-improving Artificial General Intelligence?

Bostrom is not alone in worrying about the pace and potential social impact of emerging technologies. Elon Musk has called emerging superintelligence a potential “demon,” and in a since-deleted blog post suggested such a danger could be only five years away. DeepMind agreed to be acquired by Google only on the condition that an ethics board be set up to explore the potential impacts of where machine learning could go in the hands of a data behemoth like Google.

Moore’s Law and Forks In The Road

One of the variables in Bostrom’s superintelligence trajectory is the competence issue: how fast will we get to a self-organizing, self-programming, self-improving AGI? We generally know that Moore’s Law has nearly exhausted the potential of CMOS as a computational substrate. Estimates vary, but conservatively we are only 3-5 years away from the ultimate 3-7 nm feature limits of CMOS. DARPA and other funding agencies are actively looking for the next computational substrate, but so far no clear candidates have emerged for next-generation hardware that could sustain the exponential doubling cadence of Moore’s Law. So one of the wild cards is: what’s the next substrate?
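As a quick sanity check on that estimate, here is a back-of-the-envelope sketch in Python. The starting point (a 14 nm process in 2014), the ~0.7x linear shrink per node, and the two-year node cadence are my own illustrative assumptions, not figures from DARPA or from Bostrom:

    import math

    # Illustrative assumptions (not from the article): a 14 nm process today,
    # a ~0.7x linear shrink per node, and a new node roughly every two years.
    START_NM, SHRINK, YEARS_PER_NODE = 14.0, 0.7, 2.0

    def years_to_feature(target_nm):
        """Years of continued scaling needed to reach a given feature size."""
        nodes = math.log(target_nm / START_NM) / math.log(SHRINK)
        return nodes * YEARS_PER_NODE

    for target_nm in (7, 5, 3):
        print(f"{target_nm} nm in roughly {years_to_feature(target_nm):.0f} years")

Under those assumptions the 7 nm end of the range is about four years out and the 3 nm end closer to nine, which is at least in the same ballpark as the conservative estimate above.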

Another variable is that nobody REALLY knows what the computational capacity of a human brain is. The most widely circulated estimate, put forth by Ray Kurzweil in his book The Singularity Is Near, is 2×10^16 calculations per second. It turns out that number is based on the rate at which a single neuron fires, around a thousand times per second, and it does not differ much from von Neumann’s estimate from the 1950s.

Part of the reason for that conservative estimate is that the Hodgkin-Huxley view of the neuron as the basic “computational primitive” of the brain has also changed very little since the 1950s. However, at a 2010 Google workshop on Quantum Biology, computational biophysicist Jack Tuszynski suggested that if you assume nanosecond-timescale transitions at the cytoskeletal level, and that some kind of massively parallel computation is going on at the atomic scale WITHIN the neuron, then the human brain does something more like 10^28 calculations per second.
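The gap between those two estimates matters for the timeline question, so here is a second back-of-the-envelope sketch. The starting figure of roughly 10^15 operations per second for a top supercomputer and the two-year doubling time are my own assumptions for illustration, not numbers from Kurzweil or Tuszynski:

    import math

    # Illustrative assumptions (mine, not the cited authors'):
    START_OPS = 1e15        # ~order of magnitude of a current top supercomputer
    DOUBLING_YEARS = 2.0    # classic Moore's Law doubling cadence

    def years_to_reach(target_ops):
        """Years of sustained doubling needed to reach target_ops per second."""
        doublings = math.log2(target_ops / START_OPS)
        return doublings * DOUBLING_YEARS

    for label, target in (("Kurzweil estimate", 2e16), ("Tuszynski estimate", 1e28)):
        print(f"{label}: {target:.0e} ops/s -> ~{years_to_reach(target):.0f} years of doubling")

Under those assumptions, matching the Kurzweil figure is roughly a decade of continued doubling away, while matching the Tuszynski figure would take the better part of a century, and that is before asking whether any post-CMOS substrate can sustain the doubling at all.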

Viral Intelligence?

As I mentioned in my previous update, our local IEEE section currently has a study group of scientists and engineers who are sitting through Vincent Racaniello’s online Virology course. The multi-disciplinary group is made up of biologists, computer scientists, and engineers, all interested in developing novel, biologically-inspired malware detection and defense systems. We also think that understanding the biophysical laws of supramolecular assembly that viruses exploit can be useful for engineering nanotech devices.

Last week’s HIV-microtubule lectures included an observation that once biological viruses are internalized by a cell, most of them get carried to the nucleus along microtubules by molecular motors. Here, for example, is a picture of HIV virions, labeled with Green Fluorescent Protein (GFP), making their way along the microtubule highway. Normally virions can’t be visualized optically, but the intense GFP signal makes this amazing image possible.

The details of how viruses couple to molecular motors are only now being worked out, but one of the things that happens to the virus while it’s motoring along the microtubule is that it gets “uncoated,” which allows the nucleic acid payload to be delivered to the replication machinery in the host cell’s nucleus.

The microtubule-virus interaction is very exciting, especially to my colleagues at Japan’s National Institute for Materials Science (NIMS), who just published their latest biophysical observations of microtubules in Nature’s Scientific Reports. The latest paper is the fourth to acknowledge a 2010-2013 grant I awarded when I was Deputy Director of AFOSR’s Tokyo Detachment. The title of the original grant was “Information Processing in Single Microtubules.” What the latest paper shows is:

  • A “common frequency” of 3.77 MHz that acts as a set-point between mechanical and electromagnetic oscillations for all tubulins tested (plant, animal and fungal), with one exception: tubulin from cancer cells shows no set-point.
  • Data suggesting a pattern-based, geometric language of biological assembly, built on a chain of resonant vibrations across multiple timescales and frequency domains.
  • The Japanese team has also used these observations to build self-assembling, nano-scale dendrimers.

The idea of a “common frequency” for a protein raises the question of a possible fourth state of condensed matter, one where you might be able to bake “intelligence” into a molecule, along with a “resonance chain” that allows for massively parallel communication and coordination within and between cells.

In light of these observations, we think that we can drive viral-microtubule interactions and uncoating with the proper externally-applied electromagnetic frequency. We also believe that the same approach could be applied to untangling misfolded proteins like the neurofibrillary tangles seen in Alzheimer’s disease.  Several experiments along these lines are planned for 2015.

This week’s virology lectures provided just as much technical insight. In lecture #4, Racaniello describes viruses as molecular machines. As such, the free-energy landscape is an important driver of their behavior. One of the related keys to viral success is their “metastability.” Here, metastable just means stability that depends on context.

What that means is that viruses need enough stability to protect their genomic payload from external damage, but once they reach the target location inside the cell, they need to be unstable enough to fall apart and release that genome. How does a virus “know” all that?
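One way to picture that metastability is as a simple two-state free-energy model: the assembled capsid sits in a low-energy “closed” state, and the “open,” genome-releasing state only becomes favorable once the right trigger changes the energetics. Here is a minimal sketch of that idea; the energy values are made-up illustrative numbers, not measurements from our lab or from the course:

    import math

    KT = 0.593  # thermal energy in kcal/mol at roughly 298 K

    def open_fraction(delta_g_open_kcal):
        """Equilibrium fraction of capsids in the 'open' state for a two-state
        (closed/open) model, given G_open - G_closed in kcal/mol."""
        return 1.0 / (1.0 + math.exp(delta_g_open_kcal / KT))

    # Made-up numbers: outside the cell the open state is strongly disfavored;
    # the right intracellular trigger stabilizes it.
    print("before trigger (+5 kcal/mol):", f"{open_fraction(5.0):.4f}")
    print("after trigger  (-2 kcal/mol):", f"{open_fraction(-2.0):.4f}")

A small shift in relative free energy is enough to flip the population from almost entirely closed to almost entirely open, which is exactly the kind of context-dependent stability the lectures describe.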

Our lab has been studying THz spectroscopy of viruses, and in collaboration with Peter Ortoleva’s group at Indiana University we have noted several THz-scale resonant frequencies, as well as how strongly the free-energy landscape drives viral capsid conformation. Here, for instance, are molecular dynamics models of a viral coat at differing energy levels.

[Figures: molecular dynamics snapshots of the 1cwp capsid swelling simulation at two energy states]

Racaniello compares the viral capsid to the Japanese toy Bakugan, a spring-loaded ball. When the parts are compressed together, it rolls along in a higher potential-energy state. But when it encounters the right signal, a magnet, the latches release and the ball falls open. As the images above show, the energy state on the right would be permissive for viral genome release.

One point Racaniello has made repeatedly is that the course team took great pains to strip the notes and text of any language that might attribute anthropomorphic intent to viruses. Viruses don’t “want” to do anything.

But of course, we still couldn’t resist the urge to ask, “do viruses exhibit adaptive/intelligent behavior?”

An Equation for Intelligence?

Certainly the Japanese microtubule and “brain jelly” molecular computing work raises the question of a “supramolecular assembly language.” So it would be really nice if we could find a simple, elegant way to measure intelligent/adaptive behavior at that scale. It turns out there may be such an equation!

Alex Wissner-Gross’s recent paper on causal entropic forces, along with his more accessible TED Talk, outlines just such an equation, in which “intelligence” shows up as a measurable physical force: one that resists entrapment and acts to keep future options open. The magnitude of that force is proportional to the gradient of the causal path entropy, roughly, how many distinct futures remain reachable from the present state, evaluated over a chosen time horizon and scaled by a temperature-like constant.

F = T ∇S_τ
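To make that concrete, here is a toy sketch of my own (not code from the paper): an agent on a one-dimensional corridor that, at each step, estimates the entropy of where it could end up after τ more random steps and moves in the direction that keeps the most futures open. It duly drifts away from the walls, which is about the simplest possible form of “resisting entrapment”:

    import math
    import random
    from collections import Counter

    CORRIDOR = 20   # positions 0..19, walls at both ends
    TAU = 8         # planning horizon (steps of simulated future)
    ROLLOUTS = 300  # Monte Carlo samples per candidate move

    def rollout_end(pos, steps):
        """Random walk for `steps` steps, clamped at the walls."""
        for _ in range(steps):
            pos = min(CORRIDOR - 1, max(0, pos + random.choice((-1, 1))))
        return pos

    def path_entropy(pos):
        """Shannon entropy (bits) of the end-state distribution of random
        futures starting from `pos` -- a crude stand-in for S_tau."""
        counts = Counter(rollout_end(pos, TAU) for _ in range(ROLLOUTS))
        return -sum((c / ROLLOUTS) * math.log2(c / ROLLOUTS) for c in counts.values())

    def step(pos):
        """Move in whichever direction has the higher estimated future entropy."""
        candidates = [min(CORRIDOR - 1, max(0, pos + d)) for d in (-1, 1)]
        return max(candidates, key=path_entropy)

    pos = 1  # start pinned near a wall
    trajectory = [pos]
    for _ in range(15):
        pos = step(pos)
        trajectory.append(pos)
    print(trajectory)  # tends to drift toward the open middle of the corridor

The real causal entropic force is defined over continuous phase-space paths; this greedy, discrete version only captures the qualitative behavior of moving so as to keep options open.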

What If?

So to loop this back to Bostrom’s idea of differential tech development, and focus on the control systems, what if we were to combine the Wissner-Gross intelligence equation with a grammar-driven language? And what if the Japanese work is correct in positing a frequency-fractal, geometric language of nested rhythms within biological systems, and that language has its own dictionary, design parameters, lexicon, grammar, and ontologies for assessing costs, risks and rewards?

Then that might start to look like a super-intelligent control system.

Let’s build that first.

