How are neural networks encoded in the DNA?

The central nervous system, as well as the brain->muscle and sensory-cell->brain pathways, must be precisely wired for life to be possible. Moreover, these pathways are wired almost exactly the same way across individuals of the same species (animal, human, insect, etc.).

This information must therefore be somehow encoded in the DNA.

I have never heard of any study or theory on how this would be possible - only about how proteins are encoded - and I couldn't find any information on this topic (I am not a biologist, though).

I would like to get a broad idea of what is currently known about the construction of such networks, i.e. not detailed explanations but rather takeaway facts and references.

  • How is information about neural connections encoded in the DNA?

e.g. are there any genes linked to neurons (e.g. coding for neurotransmitters)? Does the DNA code for precise neural connections? Can the development of specific parts of the brain be linked to specific parts of the DNA? How does a neuron know from the DNA which neurotransmitter to be sensitive to?

  • How are connections between specific neurons established?

e.g. how does an axon know where to deliver a potential? How can the end of an axon move to the correct place?


Here's at least one paper describing some of what is known about the development of the neural network in Caenorhabditis elegans:

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1001044

It mentions at least one relevant gene

" There is evidence showing that several guidance molecules like Netrin and Nerfin-1 are expressed in the early stages of development. Netrin is a protein involved in axonal guidance in vertebrates as well as in invertebrates [21], [22], [23] and is specifically known to influence early path-finding events [23], [24], [25]. Nerfin-1 belonging to a highly conserved family of Zn-finger proteins, is found transiently expressed in neuron precursors and plays a role in early path finding. Studies involving pioneering neurons in the central nervous system of Drosophila melanogaster have shown that Nerfin-1, whose expression is spatially and temporally regulated [25], is essential in early axonal guidance. "

You can also have a look at the Wikipedia article called "Development_of_the_nervous_system_in_humans".

One interesting fact is that C. elegans has about 20,000 genes, 302 neurons, roughly 5,000 chemical synapses, 2,000 neuromuscular junctions, and some 500 gap junctions. Humans have an estimated 20,000-25,000 genes, about 85 billion neurons, and 100 trillion synapses.

This excerpt from http://www.tandfonline.com/doi/full/10.3109/0954898X.2011.638968 explains this seeming paradox, though:

"studies in a range of species suggest that fundamental similarities, in spatial and topological features as well as in developmental mechanisms for network formation, are retained across evolution. 'Small-world' topology and highly connected regions (hubs) are prevalent across the evolutionary scale, ensuring efficient processing and resilience to internal (e.g. lesions) and external (e.g. environment) changes. Furthermore, in most species, even the establishment of hubs, long-range connections linking distant components, and a modular organization, relies on similar mechanisms."


Bio-inspired cryptosystem with DNA cryptography and neural networks

Bio-Inspired Cryptosystems are a modern form of cryptography where bio-inspired and machine learning techniques are used for the purpose of securing data. A system has been proposed based on the Central Dogma of Molecular Biology (CDMB) for the encryption and decryption algorithms, simulating the natural processes of genetic coding (conversion from binary to DNA bases), transcription (conversion from DNA to mRNA), and translation (conversion from mRNA to protein), as well as the reverse processes, to allow for encryption and decryption respectively. All inputs are considered to be in the form of 16-bit blocks. The final outputs from the blocks can be concatenated to form the final ciphertext in the form of protein bases. A Bidirectional Associative Memory Neural Network (BAMNN) has been trained using randomized data for key generation; it is capable of saving memory space by remembering and regenerating the sets of keys in a recurrent fashion. The proposed bio-inspired cryptosystem shows competitive encryption and decryption times even on large data sizes when compared with existing systems.
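
To make the pipeline concrete, here is a minimal Python sketch of the three forward stages the abstract names. The bit-to-base and complement tables are conventional, but the block handling and codon grouping are illustrative assumptions; the paper's actual lookup tables, key scheme, and BAMNN are not reproduced here.

```python
# Illustrative sketch of the CDMB-style encoding stages (not the paper's
# actual tables or key scheme; a real cipher would key these mappings).

BIT_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}   # genetic coding
DNA_TO_MRNA = {"A": "U", "T": "A", "C": "G", "G": "C"}       # transcription

def genetic_coding(bits: str) -> str:
    """Convert a 16-bit block to 8 DNA bases (2 bits per base)."""
    assert len(bits) == 16
    return "".join(BIT_TO_BASE[bits[i:i + 2]] for i in range(0, 16, 2))

def transcription(dna: str) -> str:
    """Complement each DNA base to obtain the mRNA strand."""
    return "".join(DNA_TO_MRNA[b] for b in dna)

def translation(mrna: str) -> list[str]:
    """Group the mRNA into 3-base codons; a keyed codon-to-amino-acid
    table (omitted here) would map these to the final 'protein' symbols."""
    return [mrna[i:i + 3] for i in range(0, len(mrna) - 2, 3)]

block = "1010010111000011"          # one 16-bit plaintext block
print(translation(transcription(genetic_coding(block))))
```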


Explainable Artificial Intelligence for Decoding Regulatory Instructions in DNA

Researchers used DNA sequences from high-resolution experiments to train a neural network called BPNet, whose "black box" inner workings were then uncovered to reveal sequence patterns and organizing principles of the genome's regulatory code. Credit: Illustration courtesy of Mark Miller, Stowers Institute for Medical Research

Opening the black box to uncover the rules of the genome’s regulatory code.

Researchers at the Stowers Institute for Medical Research, in collaboration with colleagues at Stanford University and Technical University of Munich, have developed advanced explainable artificial intelligence (AI) in a technical tour de force to decipher regulatory instructions encoded in DNA. In a report published online on February 18, 2021, in Nature Genetics, the team found that a neural network trained on high-resolution maps of protein-DNA interactions can uncover subtle DNA sequence patterns throughout the genome and provide a deeper understanding of how these sequences are organized to regulate genes.

Neural networks are powerful AI models that can learn complex patterns from diverse types of data such as images, speech signals, or text to predict associated properties with impressively high accuracy. However, many see these models as uninterpretable, since the learned predictive patterns are hard to extract from the model. This black-box nature has hindered the wide application of neural networks to biology, where interpretation of predictive patterns is paramount.

One of the big unsolved problems in biology is the genome’s second code—its regulatory code. DNA bases (commonly represented by letters A, C, G, and T) encode not only the instructions for how to build proteins, but also when and where to make these proteins in an organism. The regulatory code is read by proteins called transcription factors that bind to short stretches of DNA called motifs. However, how particular combinations and arrangements of motifs specify regulatory activity is an extremely complex problem that has been hard to pin down.

Now, an interdisciplinary team of biologists and computational researchers led by Stowers Investigator Julia Zeitlinger, PhD, and Anshul Kundaje, PhD, from Stanford University, has designed a neural network—named BPNet for Base Pair Network—that can be interpreted to reveal the regulatory code by predicting transcription factor binding from DNA sequences with unprecedented accuracy. The key was to perform transcription factor-DNA binding experiments and computational modeling at the highest possible resolution, down to the level of individual DNA bases. This increased resolution allowed them to develop new interpretation tools to extract key elemental sequence patterns, such as transcription factor binding motifs, and the combinatorial rules by which motifs function together as a regulatory code.

“This was extremely satisfying,” says Zeitlinger, “as the results fit beautifully with existing experimental results, and also revealed novel insights that surprised us.”

For example, the neural network models enabled the researchers to discover a striking rule that governs binding of the well-studied transcription factor called Nanog. They found that Nanog binds cooperatively to DNA when multiples of its motif are present in a periodic fashion such that they appear on the same side of the spiraling DNA helix.

“There has been a long trail of experimental evidence that such motif periodicity sometimes exists in the regulatory code,” Zeitlinger says. “However, the exact circumstances were elusive, and Nanog had not been a suspect. Discovering that Nanog has such a pattern, and seeing additional details of its interactions, was surprising because we did not specifically search for this pattern.”

“This is the key advantage of using neural networks for this task,” says Žiga Avsec, PhD, first author of the paper. Avsec and Kundaje created the first version of the model when Avsec visited Stanford during his doctoral studies in the lab of Julien Gagneur, PhD, at the Technical University in Munich, Germany.

“More traditional bioinformatics approaches model data using pre-defined rigid rules that are based on existing knowledge. However, biology is extremely rich and complicated,” says Avsec. “By using neural networks, we can train much more flexible and nuanced models that learn complex patterns from scratch without previous knowledge, thereby allowing novel discoveries.”

BPNet’s network architecture is similar to that of neural networks used for facial recognition in images. For instance, the neural network first detects edges in the pixels, then learns how edges form facial elements like the eye, nose, or mouth, and finally detects how facial elements together form a face. Instead of learning from pixels, BPNet learns from the raw DNA sequence and learns to detect sequence motifs and eventually the higher-order rules by which the elements predict the base-resolution binding data.
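
As a rough illustration of this layered idea, below is a minimal PyTorch sketch of a BPNet-style model: one-hot encoded DNA goes in, a base-resolution profile comes out, and stacked dilated convolutions act as the successive layers that combine motifs into higher-order patterns. The layer sizes and depths are assumed for illustration; this is a stand-in, not the published architecture.

```python
import torch
import torch.nn as nn

class TinyBPNet(nn.Module):
    """Simplified BPNet-style model: one-hot DNA (4 channels) in,
    a per-base binding profile out. Not the published architecture."""
    def __init__(self, channels: int = 64, layers: int = 4):
        super().__init__()
        self.stem = nn.Conv1d(4, channels, kernel_size=25, padding=12)
        # Dilated convolutions grow the receptive field so distant motifs
        # can interact, while keeping base-pair resolution.
        self.body = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=3,
                      dilation=2 ** i, padding=2 ** i)
            for i in range(1, layers + 1)
        )
        self.head = nn.Conv1d(channels, 1, kernel_size=1)  # profile per base

    def forward(self, x):                     # x: (batch, 4, length)
        h = torch.relu(self.stem(x))
        for conv in self.body:
            h = h + torch.relu(conv(h))       # residual connection
        return self.head(h)                   # (batch, 1, length)

# One-hot encode a toy sequence and run the model.
seq = "ACGTAACCGGTTACGT"
idx = {"A": 0, "C": 1, "G": 2, "T": 3}
x = torch.zeros(1, 4, len(seq))
for i, base in enumerate(seq):
    x[0, idx[base], i] = 1.0
profile = TinyBPNet()(x)                      # predicted per-base signal
```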

Once the model is trained to be highly accurate, the learned patterns are extracted with interpretation tools. The output signal is traced back to the input sequences to reveal sequence motifs. The final step is to use the model as an oracle and systematically query it with specific DNA sequence designs, similar to what one would do to test hypotheses experimentally, to reveal the rules by which sequence motifs function in a combinatorial manner.
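
Continuing the sketch above, the back-tracing step can be illustrated with simple gradient-times-input saliency; the published pipeline uses DeepLIFT contribution scores and TF-MoDISco motif discovery, which are more robust, so treat this only as the general idea.

```python
# Trace the output back to the input: which bases, if changed, would
# most change the predicted profile? (Illustrative saliency only.)
model = TinyBPNet()
x.requires_grad_(True)
model(x).sum().backward()
saliency = (x.grad * x).detach().sum(dim=1)   # score per base actually present
print(saliency.shape)                         # (1, sequence_length)
```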

“The beauty is that the model can predict way more sequence designs than we could test experimentally,” Zeitlinger says. “Furthermore, by predicting the outcome of experimental perturbations, we can identify the experiments that are most informative to validate the model.” Indeed, with the help of CRISPR gene editing techniques, the researchers confirmed experimentally that the model’s predictions were highly accurate.

Since the approach is flexible and applicable to a variety of different data types and cell types, it promises to lead to a rapidly growing understanding of the regulatory code and how genetic variation impacts gene regulation. Both the Zeitlinger Lab and the Kundaje Lab are already using BPNet to reliably identify binding motifs for other cell types, relate motifs to biophysical parameters, and learn other structural features in the genome such as those associated with DNA packaging. To enable other scientists to use BPNet and adapt it for their own needs, the researchers have made the entire software framework available with documentation and tutorials.

Reference: “Base-resolution models of transcription-factor binding reveal soft motif syntax” by Žiga Avsec, Melanie Weilert, Avanti Shrikumar, Sabrina Krueger, Amr Alexandari, Khyati Dalal, Robin Fropf, Charles McAnany, Julien Gagneur, Anshul Kundaje and Julia Zeitlinger, 18 February 2021, Nature Genetics.
DOI: 10.1038/s41588-021-00782-6

Other contributors to the study included Melanie Weilert, Sabrina Krueger, PhD, Khyati Dalal, Robin Fropf, PhD, and Charles McAnany, PhD, from Stowers and Avanti Shrikumar, PhD, and Amr Alexandari from Stanford University.

This work was supported in part by the Stowers Institute for Medical Research and the National Human Genome Research Institute (awards R01HG009674 and U01HG009431 to A.K. and R01HG010211 to J.Z.) and National Institute of General Medical Sciences (DP2GM123485 to A.K.) of the National Institutes of Health (NIH). Additional support included the German Bundesministerium für Bildung und Forschung (project MechML 01IS18053F to Z.A.) and a Stanford BioX Fellowship and Howard Hughes Medical Institute International Student Research Fellowship (to A.S). Sequencing was performed at the Stowers Institute for Medical Research and University of Kansas Medical Center Genomics Core supported by the NIH awards from the National Institute of Child Health and Human Development (U54HD090216), Office of the Director (Instrumentation S10OD021743), and National Institute of General Medical Sciences (COBRE P30GM122731). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

Lay Summary of Findings

DNA is well known for encoding proteins. It also contains another code—a regulatory code—that directs when and where to make proteins in an organism. In a report published online February 18, 2021, in Nature Genetics, researchers from the lab of Julia Zeitlinger, PhD, an investigator at the Stowers Institute for Medical Research, and collaborators from Stanford University and Technical University of Munich describe how they have used explainable artificial intelligence to help decipher the genome’s regulatory code.

The researchers developed a neural network whose inner workings can be uncovered to reveal regulatory DNA sequence patterns and their higher-level organizing principles from high-resolution genomics data. The Zeitlinger Lab anticipates that the predictive models, rules, and maps generated using this type of approach will lead to a better understanding of natural and disease-associated genetic variation in regulatory regions of DNA.


How neural networks solve this problem

Artificial neural networks have become one of the most popular methods of predicting and searching for interconnections in biological systems. Such a network simulates, to a certain extent, biological neural networks in a brain and functions as a collection of connected computational units that are able to receive input data, transmit signals to each other, and generate a response. The more complex the architecture of this collection is, the more complex the neural network is and the more complex tasks it can learn to solve.

What’s interesting is that, when it comes to complex studies of genomes or other biological data, the researcher often needs not only to obtain predictions from the neural network but also to understand its learning process post factum. For instance, a neural network can find a pattern in the interaction of particular proteins and particular segments of DNA and learn to predict which new proteins will have similar properties. However, scientists will still need to figure out what exactly this discovered pattern is, as the neural network does not learn the way people do: it has a completely different logic and tracks its "research" in an alternative way.

Nowadays, the effective usage of neural networks in biology and medicine is just in its infancy. In a new article published in Cell, a group of MIT researchers led by James Collins reported that they had successfully screened millions of antibiotic candidates using deep learning methods (a family of machine learning methods based on neural networks).

During the learning process, the neural network was trained to spot potential antibiotics among 2,335 molecules whose effect on the model bacterium — Escherichia coli — was well known. The chemical structure of each molecule was encoded using a set of numbers describing the interconnections between the atoms. The task of the neural network was to detect the motifs in such structures that were responsible for their antimicrobial activity.

Once the system learned to predict the properties of a substance based on the shape and composition of its molecule, it was granted access to several electronic chemical libraries of a much larger volume. These libraries contained more than a hundred million molecules in total, and the overwhelming majority of them had never been studied for their effect on bacterial cells.
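
The study itself used a directed message-passing neural network operating on molecular graphs; as a rough sketch of the same train-then-screen workflow under much simpler assumptions, one can featurize molecules with RDKit Morgan fingerprints and rank an unscreened library with an off-the-shelf classifier. All molecules and labels below are toy placeholders.

```python
# Simplified screen-then-rank sketch (assumes rdkit and scikit-learn).
# The actual study used a message-passing neural network, not fingerprints.
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles: str) -> list[int]:
    """Encode a molecule's structure as a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    return list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))

# Hypothetical training set: molecules with known effect on E. coli
# growth (1 = inhibits, 0 = does not).
train_smiles = ["CCO", "CC(=O)O", "c1ccccc1O", "CCN"]
train_labels = [0, 0, 1, 0]

clf = RandomForestClassifier(n_estimators=200).fit(
    [fingerprint(s) for s in train_smiles], train_labels)

# Rank a (here tiny, in reality ~10^8-molecule) library by predicted activity.
library = ["c1ccccc1N", "CCCO"]
scores = clf.predict_proba([fingerprint(s) for s in library])[:, 1]
print(sorted(zip(scores, library), reverse=True))
```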


Natural computing

The most established "classical" nature-inspired models of computation are cellular automata, neural computation, and evolutionary computation. More recent computational systems abstracted from natural processes include swarm intelligence, artificial immune systems, membrane computing, and amorphous computing. Detailed reviews can be found in many books. [8] [9]

Cellular automata

A cellular automaton is a dynamical system consisting of an array of cells. Space and time are discrete and each of the cells can be in a finite number of states. The cellular automaton updates the states of its cells synchronously according to the transition rules given a priori. The next state of a cell is computed by a transition rule and it depends only on its current state and the states of its neighbors.

Conway's Game of Life is one of the best-known examples of cellular automata, shown to be computationally universal. Cellular automata have been applied to modelling a variety of phenomena such as communication, growth, reproduction, competition, evolution and other physical and biological processes.
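
A compact numpy sketch of this update rule, using Conway's Game of Life on a toroidal grid seeded with a glider, makes the synchronous, neighbour-only nature of the transition explicit:

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One synchronous Game of Life update on a toroidal grid."""
    # Count the eight neighbours of every cell by summing shifted copies.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours; a dead cell is born with 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1  # glider
for _ in range(4):
    grid = life_step(grid)   # the glider translates itself diagonally
```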

Neural computation

Neural computation is the field of research that emerged from the comparison between computing machines and the human nervous system. [10] This field aims both to understand how the brain of living organisms works (brain theory or computational neuroscience), and to design efficient algorithms based on the principles of how the human brain processes information (Artificial Neural Networks, ANN [11] ).

Evolutionary computation

Evolutionary computation [13] is a computational paradigm inspired by Darwinian evolution.

An artificial evolutionary system is a computational system based on the notion of simulated evolution. It comprises a constant- or variable-size population of individuals, a fitness criterion, and genetically inspired operators that produce the next generation from the current one. The initial population is typically generated randomly or heuristically, and typical operators are mutation and recombination. At each step, the individuals are evaluated according to the given fitness function (survival of the fittest). The next generation is obtained from selected individuals (parents) by using genetically inspired operators. The choice of parents can be guided by a selection operator which reflects the biological principle of mate selection. This process of simulated evolution eventually converges towards a nearly optimal population of individuals, from the point of view of the fitness function.

The study of evolutionary systems has historically evolved along three main branches: Evolution strategies provide a solution to parameter optimization problems for real-valued as well as discrete and mixed types of parameters. Evolutionary programming originally aimed at creating optimal "intelligent agents" modelled, e.g., as finite state machines. Genetic algorithms [14] applied the idea of evolutionary computation to the problem of finding a (nearly-)optimal solution to a given problem. Genetic algorithms initially consisted of an input population of individuals encoded as fixed-length bit strings, the genetic operators mutation (bit flips) and recombination (combination of a prefix of a parent with the suffix of the other), and a problem-dependent fitness function. Genetic algorithms have been used to optimize computer programs, called genetic programming, and today they are also applied to real-valued parameter optimization problems as well as to many types of combinatorial tasks.
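
A minimal genetic algorithm in exactly this classical form (fixed-length bit strings, one-point prefix/suffix crossover, bit-flip mutation, tournament selection) can be sketched in a few lines; the OneMax fitness function and all parameter values are illustrative choices:

```python
import random

def ga_onemax(n_bits=32, pop_size=50, generations=100, p_mut=0.02):
    """Minimal genetic algorithm; fitness = number of 1-bits (OneMax)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def select():  # tournament of two: the fitter individual becomes a parent
        a, b = random.sample(pop, 2)
        return a if sum(a) >= sum(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = random.randrange(1, n_bits)        # one-point crossover
            child = p1[:cut] + p2[cut:]              # prefix + suffix
            child = [b ^ (random.random() < p_mut) for b in child]  # bit flips
            nxt.append(child)
        pop = nxt
    return max(pop, key=sum)

best = ga_onemax()   # converges towards the all-ones string
```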

Estimation of Distribution Algorithms (EDAs), on the other hand, are evolutionary algorithms that substitute traditional reproduction operators with model-guided ones. Such models are learned from the population by employing machine learning techniques and are represented as probabilistic graphical models, from which new solutions can be sampled [15] [16] or generated from guided crossover. [17] [18]

Swarm intelligence

Swarm intelligence, [19] sometimes referred to as collective intelligence, is defined as the problem solving behavior that emerges from the interaction of individual agents (e.g., bacteria, ants, termites, bees, spiders, fish, birds) which communicate with other agents by acting on their local environments.

Particle swarm optimization applies this idea to the problem of finding an optimal solution to a given problem by a search through a (multi-dimensional) solution space. The initial set-up is a swarm of particles, each representing a possible solution to the problem. Each particle has its own velocity which depends on its previous velocity (the inertia component), the tendency towards the past personal best position (the nostalgia component), and its tendency towards a global neighborhood optimum or local neighborhood optimum (the social component). Particles thus move through a multidimensional space and eventually converge towards a point between the global best and their personal best. Particle swarm optimization algorithms have been applied to various optimization problems, and to unsupervised learning, game learning, and scheduling applications.
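
The velocity update just described can be written directly; in this minimal sketch, w is the inertia weight, c1 scales the nostalgia (personal best) pull, and c2 the social (global best) pull. The coefficients and the sphere test function are conventional illustrative choices:

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization (minimizes f)."""
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # personal best positions
    gbest = min(pbest, key=f)                 # global best position
    for _ in range(iters):
        for i, x in enumerate(X):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]                       # inertia
                           + c1 * r1 * (pbest[i][d] - x[d])  # nostalgia
                           + c2 * r2 * (gbest[d] - x[d]))    # social
                x[d] += V[i][d]
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
        gbest = min(pbest, key=f)
    return gbest

sphere = lambda x: sum(v * v for v in x)
print(pso(sphere))   # converges near the origin, the minimum of the sphere
```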

In the same vein, ant algorithms model the foraging behaviour of ant colonies. To find the best path between the nest and a source of food, ants rely on indirect communication: they lay a pheromone trail on the way back to the nest when they have found food, and follow the concentration of pheromones when searching for food. Ant algorithms have been successfully applied to a variety of combinatorial optimization problems over discrete search spaces.
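
The lay-pheromone-then-follow-pheromone loop can likewise be sketched for a small travelling salesman instance. The deposit rule (amount inversely proportional to tour length) and the parameter values are standard textbook choices, not a specific published variant:

```python
import random

def ant_tsp(dist, n_ants=20, iters=100, rho=0.5, alpha=1.0, beta=2.0):
    """Toy ant colony optimization for a symmetric TSP.
    alpha weights pheromone; beta weights inverse distance; rho evaporates."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:                  # build a tour city by city
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]
                tour.append(random.choices(cand, weights)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate, then deposit pheromone inversely to tour length.
        tau = [[t * (1 - rho) for t in row] for row in tau]
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(ant_tsp(dist))   # a shortest round trip for this toy instance
```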

Artificial immune systems

Artificial immune systems (a.k.a. immunological computation or immunocomputing) are computational systems inspired by the natural immune systems of biological organisms.

Viewed as an information processing system, the natural immune system of organisms performs many complex tasks in a parallel and distributed fashion. [20] These include distinguishing between self and nonself, [21] neutralization of nonself pathogens (viruses, bacteria, fungi, and parasites), learning, memory, associative retrieval, self-regulation, and fault-tolerance. Artificial immune systems are abstractions of the natural immune system, emphasizing these computational aspects. Their applications include computer virus detection, anomaly detection in time series data, fault diagnosis, pattern recognition, machine learning, bioinformatics, optimization, robotics, and control.

Membrane computing

Membrane computing investigates computing models abstracted from the compartmentalized structure of living cells effected by membranes. [22] A generic membrane system (P-system) consists of cell-like compartments (regions) delimited by membranes, that are placed in a nested hierarchical structure. Each membrane-enveloped region contains objects, transformation rules which modify these objects, as well as transfer rules, which specify whether the objects will be transferred outside or stay inside the region. Regions communicate with each other via the transfer of objects. The computation by a membrane system starts with an initial configuration, where the number (multiplicity) of each object is set to some value for each region (multiset of objects). It proceeds by choosing, nondeterministically and in a maximally parallel manner, which rules are applied to which objects. The output of the computation is collected from an a priori determined output region.
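
A drastically simplified, single-region Python sketch of this behaviour shows the core idea of maximally parallel multiset rewriting; real P-systems add nested membranes and transfer rules, which are omitted here:

```python
from collections import Counter
import random

def p_step(objects: Counter, rules) -> Counter:
    """One maximally parallel step of a toy one-region membrane system.
    Rules consume a multiset of objects and produce another; products
    only become available at the next step."""
    objects = objects.copy()
    produced = Counter()
    applied = True
    while applied:                       # keep firing rules until none fit
        applied = False
        for lhs, rhs in random.sample(rules, len(rules)):  # nondeterministic
            if all(objects[o] >= n for o, n in lhs.items()):
                objects -= Counter(lhs)
                produced += Counter(rhs)
                applied = True
                break
    return objects + produced

rules = [({"a": 2}, {"b": 1}),           # a,a -> b
         ({"b": 1}, {"c": 1})]           # b -> c
state = Counter({"a": 4, "b": 1})
print(p_step(state))                     # Counter({'b': 2, 'c': 1})
```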

Applications of membrane systems include machine learning, modelling of biological processes (photosynthesis, certain signaling pathways, quorum sensing in bacteria, cell-mediated immunity), as well as computer science applications such as computer graphics, public-key cryptography, approximation and sorting algorithms, as well as analysis of various computationally hard problems.

Amorphous computing

In biological organisms, morphogenesis (the development of well-defined shapes and functional structures) is achieved by the interactions between cells guided by the genetic program encoded in the organism's DNA.

Inspired by this idea, amorphous computing aims at engineering well-defined shapes and patterns, or coherent computational behaviours, from the local interactions of a multitude of simple unreliable, irregularly placed, asynchronous, identically programmed computing elements (particles). [23] As a programming paradigm, the aim is to find new programming techniques that would work well for amorphous computing environments. Amorphous computing also plays an important role as the basis for "cellular computing" (see the topics synthetic biology and cellular computing, below).

Morphological computing

The understanding that morphology performs computation is used to analyze the relationship between morphology and control, and to theoretically guide the design of robots with reduced control requirements. It has been used both in robotics and for the understanding of cognitive processes in living organisms. [24]

Cognitive computing

Cognitive computing (CC) is a new type of computing, typically with the goal of modelling functions of human sensing, reasoning, and response to stimulus. [25]

Cognitive capacities of present-day cognitive computing are far from human level. The same info-computational approach can be applied to other, simpler living organisms; bacteria, for example, have been modelled computationally as cognitive systems, notably in the work of Eshel Ben-Jacob.

Artificial life

Artificial life (ALife) is a research field whose ultimate goal is to understand the essential properties of living organisms [26] by building, within electronic computers or other artificial media, ab initio systems that exhibit properties normally associated only with living organisms. Early examples include Lindenmayer systems (L-systems), which have been used to model plant growth and development. An L-system is a parallel rewriting system that starts with an initial word and applies its rewriting rules in parallel to all letters of the word. [27]
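
As a tiny concrete example, Lindenmayer's original algae model is a two-rule L-system; note that all letters are rewritten simultaneously at each step:

```python
def l_system(axiom: str, rules: dict, steps: int) -> str:
    """Parallel rewriting: every letter is replaced simultaneously."""
    for _ in range(steps):
        axiom = "".join(rules.get(ch, ch) for ch in axiom)
    return axiom

# Lindenmayer's algae model: A -> AB, B -> A.
print(l_system("A", {"A": "AB", "B": "A"}, 5))   # 'ABAABABAABAAB'
```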

Pioneering experiments in artificial life included the design of evolving "virtual block creatures" acting in simulated environments with realistic features such as kinetics, dynamics, gravity, collision, and friction. [28] These artificial creatures were selected for their ability to swim, walk, or jump, and they competed for a common limited resource (controlling a cube). The simulation resulted in the evolution of creatures exhibiting surprising behaviour: some developed hands to grab the cube, others developed legs to move towards the cube. This computational approach was further combined with rapid manufacturing technology to actually build the physical robots that had virtually evolved. [29] This marked the emergence of the field of mechanical artificial life.

The field of synthetic biology explores a biological implementation of similar ideas. Other research directions within the field of artificial life include artificial chemistry as well as traditionally biological phenomena explored in artificial systems, ranging from computational processes such as co-evolutionary adaptation and development, to physical processes such as growth, self-replication, and self-repair.

All of the computational techniques mentioned above, while inspired by nature, have been implemented until now mostly on traditional electronic hardware. In contrast, the two paradigms introduced here, molecular computing and quantum computing, employ radically different types of hardware.

Molecular computing

Molecular computing (a.k.a. biomolecular computing, biocomputing, biochemical computing, DNA computing) is a computational paradigm in which data is encoded as biomolecules such as DNA strands, and molecular biology tools act on the data to perform various operations (e.g., arithmetic or logical operations).

The first experimental realization of a special-purpose molecular computer was the 1994 breakthrough experiment by Leonard Adleman, who solved a 7-node instance of the Hamiltonian Path Problem solely by manipulating DNA strands in test tubes. [30] DNA computations start from an initial input encoded as a DNA sequence (essentially a sequence over the four-letter alphabet {A, C, G, T}), and proceed by a succession of bio-operations such as cut-and-paste (by restriction enzymes and ligases), extraction of strands containing a certain subsequence (by using Watson-Crick complementarity), copy (by using the polymerase chain reaction, which employs the polymerase enzyme), and read-out. [31] Recent experimental research has succeeded in solving more complex instances of NP-complete problems, such as a 20-variable instance of 3SAT, and in wet DNA implementations of finite state machines with potential applications to the design of smart drugs.
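
An in-silico caricature of Adleman's generate-and-filter strategy conveys the logic: generate a huge number of random paths (the role played by the massive parallelism of DNA strands), then filter by endpoints, length, and vertex coverage. The 4-node directed graph below is a toy example, not Adleman's 7-node instance:

```python
import random

edges = {(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)}   # toy directed graph
n, start, end = 4, 0, 3

def random_path(length):
    """Step 1: 'synthesize' a random path of the required length."""
    path = [start]
    while len(path) < length:
        nxt = [j for j in range(n) if (path[-1], j) in edges]
        if not nxt:
            return None
        path.append(random.choice(nxt))
    return path

# Steps 2-4: keep only paths that end at `end` and visit every vertex once.
candidates = (random_path(n) for _ in range(10000))
hamiltonian = {tuple(p) for p in candidates
               if p is not None and p[-1] == end and len(set(p)) == n}
print(hamiltonian)   # {(0, 1, 2, 3)}
```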

One of the most notable contributions of research in this field is to the understanding of self-assembly. [33] Self-assembly is the bottom-up process by which objects autonomously come together to form complex structures. Instances in nature abound, and include atoms binding by chemical bonds to form molecules, and molecules forming crystals or macromolecules. Examples of self-assembly research topics include self-assembled DNA nanostructures [34] such as Sierpinski triangles [35] or arbitrary nanoshapes obtained using the DNA origami [36] technique, and DNA nanomachines [37] such as DNA-based circuits (binary counter, bit-wise cumulative XOR), ribozymes for logic operations, molecular switches (DNA tweezers), and autonomous molecular motors (DNA walkers).

Theoretical research in molecular computing has yielded several novel models of DNA computing (e.g. splicing systems introduced by Tom Head already in 1987) and their computational power has been investigated. [38] Various subsets of bio-operations are now known to be able to achieve the computational power of Turing machines.

Quantum computing

A quantum computer [39] processes data stored as quantum bits (qubits), and uses quantum mechanical phenomena such as superposition and entanglement to perform computations. A qubit can hold a "0", a "1", or a quantum superposition of these. A quantum computer operates on qubits with quantum logic gates. Through Shor's polynomial-time algorithm for factoring integers, and Grover's algorithm for quantum database search, which has a quadratic speed advantage, quantum computers were shown to potentially possess a significant benefit relative to electronic computers.
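
A minimal numpy sketch of these notions: a qubit is a 2-component complex state vector, the Hadamard gate puts it into an equal superposition, and a CNOT then entangles two qubits into a Bell state:

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)                        # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

psi = H @ zero                 # equal superposition of |0> and |1>
probs = np.abs(psi) ** 2       # measurement probabilities: [0.5, 0.5]

# CNOT applied to (H|0>) tensor |0> yields the entangled Bell state
# (|00> + |11>)/sqrt(2): the two qubits' outcomes are perfectly correlated.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(psi, zero)
```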

Quantum cryptography is not based on the complexity of the computation, but on the special properties of quantum information, such as the fact that quantum information cannot be measured reliably and any attempt at measuring it results in an unavoidable and irreversible disturbance. A successful open air experiment in quantum cryptography was reported in 2007, where data was transmitted securely over a distance of 144 km. [40] Quantum teleportation is another promising application, in which a quantum state (not matter or energy) is transferred to an arbitrary distant location. Implementations of practical quantum computers are based on various substrates such as ion-traps, superconductors, nuclear magnetic resonance, etc. As of 2006, the largest quantum computing experiment used liquid state nuclear magnetic resonance quantum information processors, and could operate on up to 12 qubits. [41]

The dual aspect of natural computation is that it aims to understand nature by regarding natural phenomena as information processing. Already in the 1960s, Zuse and Fredkin suggested the idea that the entire universe is a computational (information processing) mechanism, modelled as a cellular automaton which continuously updates its rules. [3] [4] A recent quantum-mechanical approach of Lloyd suggests the universe as a quantum computer that computes its own behaviour, [5] while Vedral [42] suggests that information is the most fundamental building block of reality.

The view of the universe/nature as a computational mechanism is elaborated in [6], which explores nature with the help of ideas of computability, while [7], based on the idea of nature as a network of networks of information processes on different levels of organization, studies natural processes as computations (information processing).

The main directions of research in this area are systems biology, synthetic biology and cellular computing.

Systems biology

Computational systems biology (or simply systems biology) is an integrative and qualitative approach that investigates the complex communications and interactions taking place in biological systems. Thus, in systems biology, the focus of the study is the interaction networks themselves and the properties of biological systems that arise due to these networks, rather than the individual components of functional processes in an organism. This type of research on organic components has focused strongly on four different interdependent interaction networks: [43] gene-regulatory networks, biochemical networks, transport networks, and carbohydrate networks.

Gene regulatory networks comprise gene-gene interactions, as well as interactions between genes and other substances in the cell. Genes are transcribed into messenger RNA (mRNA), and then translated into proteins according to the genetic code. Each gene is associated with other DNA segments (promoters, enhancers, or silencers) that act as binding sites for activators or repressors for gene transcription. Genes interact with each other either through their gene products (mRNA, proteins) which can regulate gene transcription, or through small RNA species that can directly regulate genes. These gene-gene interactions, together with genes' interactions with other substances in the cell, form the most basic interaction network: the gene regulatory networks. They perform information processing tasks within the cell, including the assembly and maintenance of other networks. Models of gene regulatory networks include random and probabilistic Boolean networks, asynchronous automata, and network motifs.
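
As a small illustration of the Boolean-network idea, three hypothetical genes can be updated synchronously, each gene's next state being a logical function of the current states of its regulators (the regulatory logic below is invented for illustration):

```python
def step(state):
    """One synchronous update of a toy 3-gene Boolean network."""
    a, b, c = state
    return (
        not c,        # gene A is repressed by C
        a,            # gene B is activated by A
        a and b,      # gene C needs both A and B
    )

state = (True, False, False)
for _ in range(6):
    state = step(state)   # the trajectory settles into an attractor cycle
    print(state)
```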

Another viewpoint is that the entire genomic regulatory system is a computational system, a genomic computer. This interpretation allows one to compare human-made electronic computation with computation as it occurs in nature. [44]

A comparison between genomic and electronic computers

                                            Genomic computer         Electronic computer
  Architecture                              changeable               rigid
  Components construction                   on an as-needed basis    from the start
  Coordination                              causal coordination      temporal synchrony
  Distinction between hardware/software     no                       yes
  Transport media                           molecules and ions       wires

In addition, unlike a conventional computer, robustness in a genomic computer is achieved by various feedback mechanisms by which poorly functional processes are rapidly degraded, poorly functional cells are killed by apoptosis, and poorly functional organisms are out-competed by more fit species.

Biochemical networks refer to the interactions between proteins, and they perform various mechanical and metabolic tasks inside a cell. Two or more proteins may bind to each other via binding of their interaction sites, and form a dynamic protein complex (complexation). These protein complexes may act as catalysts for other chemical reactions, or may chemically modify each other. Such modifications cause changes to the available binding sites of proteins. There are tens of thousands of proteins in a cell, and they interact with each other. To describe such massive-scale interactions, Kohn maps [45] were introduced as a graphical notation to depict molecular interactions in succinct pictures. Other approaches to describing protein-protein interactions accurately and succinctly include the use of textual bio-calculus [46] or pi-calculus enriched with stochastic features. [47]

Transport networks refer to the separation and transport of substances mediated by lipid membranes. Some lipids can self-assemble into biological membranes. A lipid membrane consists of a lipid bilayer in which proteins and other molecules are embedded, being able to travel along this layer. Through lipid bilayers, substances are transported between the inside and outside of membranes to interact with other molecules. Formalisms depicting transport networks include membrane systems and brane calculi. [48]

Synthetic biology

Synthetic biology aims at engineering synthetic biological components, with the ultimate goal of assembling whole biological systems from their constituent components. The history of synthetic biology can be traced back to the 1960s, when François Jacob and Jacques Monod discovered the mathematical logic in gene regulation. Genetic engineering techniques, based on recombinant DNA technology, are a precursor of today's synthetic biology which extends these techniques to entire systems of genes and gene products.

Along with the possibility of synthesizing longer and longer DNA strands, the prospect of creating synthetic genomes with the purpose of building entirely artificial synthetic organisms became a reality. Indeed, rapid assembly of chemically synthesized short DNA strands made it possible to generate the 5,386 bp synthetic genome of a virus. [49]

Alternatively, Smith et al. found about 100 genes that can be removed individually from the genome of Mycoplasma genitalium. This discovery paves the way to the assembly of a minimal but still viable artificial genome consisting of only the essential genes.

A third approach to engineering semi-synthetic cells is the construction of a single type of RNA-like molecule with the ability of self-replication. [50] Such a molecule could be obtained by guiding the rapid evolution of an initial population of RNA-like molecules, by selection for the desired traits.

Another effort in this field is towards engineering multi-cellular systems by designing, e.g., cell-to-cell communication modules used to coordinate living bacterial cell populations. [51]

Cellular computing

Computation in living cells (a.k.a. cellular computing, or in-vivo computing) is another approach to understand nature as computation. One particular study in this area is that of the computational nature of gene assembly in unicellular organisms called ciliates. Ciliates store a copy of their DNA containing functional genes in the macronucleus, and another "encrypted" copy in the micronucleus. Conjugation of two ciliates consists of the exchange of their micronuclear genetic information, leading to the formation of two new micronuclei, followed by each ciliate re-assembling the information from its new micronucleus to construct a new functional macronucleus. The latter process is called gene assembly, or gene re-arrangement. It involves re-ordering some fragments of DNA (permutations and possibly inversion) and deleting other fragments from the micronuclear copy. From the computational point of view, the study of this gene assembly process led to many challenging research themes and results, such as the Turing universality of various models of this process. [52] From the biological point of view, a plausible hypothesis about the "bioware" that implements the gene-assembly process was proposed, based on template guided recombination. [53] [54]
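
As a toy illustration of the descrambling idea, the sketch below re-orders scrambled gene segments by matching each segment's outgoing pointer to the next segment's incoming pointer. The segment layout is invented and greatly simplified; real gene assembly also handles inverted segments and the deletion of intervening sequences:

```python
# Scrambled macronuclear-destined segments: (incoming pointer, body,
# outgoing pointer). Pointer 0 marks the start; None marks the end.
segments = [
    (2, "GGT", 3),
    (0, "ATG", 1),
    (3, "TAA", None),
    (1, "CCA", 2),
]
by_incoming = {seg[0]: seg for seg in segments}

gene, ptr = "", 0
while ptr is not None:            # follow the pointer chain in order
    _, body, ptr = by_incoming[ptr]
    gene += body
print(gene)                       # ATGCCAGGTTAA
```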

Other approaches to cellular computing include developing an in vivo programmable and autonomous finite-state automaton with E. coli, [55] designing and constructing in vivo cellular logic gates and genetic circuits that harness the cell's existing biochemical processes (see for example [56] ) and the global optimization of stomata aperture in leaves, following a set of local rules resembling a cellular automaton. [57]

  1. ^ G.Rozenberg, T.Back, J.Kok, Editors, Handbook of Natural Computing, Springer Verlag, 2012
  2. ^ A.Brabazon, M.O'Neill, S.McGarraghy. Natural Computing Algorithms, Springer Verlag, 2015
  3. ^ ab Fredkin, F. Digital mechanics: An informational process based on reversible universal CA. Physica D 45 (1990) 254-270
  4. ^ ab Zuse, K. Rechnender Raum. Elektronische Datenverarbeitung 8 (1967) 336-344
  5. ^ ab Lloyd, S. Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos. Knopf, 2006
  6. ^ ab Zenil, H. A Computable Universe: Understanding and Exploring Nature as Computation. World Scientific Publishing Company, 2012
  7. ^ ab Dodig-Crnkovic, G. and Giovagnoli, R. COMPUTING NATURE. Springer, 2013
  8. ^ Olarius S., Zomaya A. Y., Handbook of Bioinspired Algorithms and Applications, Chapman & Hall/CRC, 2005.
  9. ^ de Castro, L. N., Fundamentals of Natural Computing: Basic Concepts, Algorithms, and Applications, CRC Press, 2006.
  10. ^ von Neumann, J. The Computer and the Brain. Yale University Press, 1958
  11. ^ Arbib, M., editor. The Handbook of Brain Theory and Neural Networks. MIT Press, 2003.
  12. ^ Rojas, R. Neural Networks: A Systematic Introduction. Springer, 1996
  13. ^ Bäck, T., Fogel, D., Michalewicz, Z., editors. Handbook of Evolutionary Computation. IOP Publishing, U.K., 1997
  14. ^ Koza, J. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, 1992
  15. ^ Pelikan, Martin; Goldberg, David E.; Cantú-Paz, Erick (1 January 1999). BOA: The Bayesian Optimization Algorithm. Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation - Volume 1. GECCO'99. pp. 525-532. ISBN 9781558606111.
  16. ^ Pelikan, Martin (2005). Hierarchical Bayesian Optimization Algorithm: Toward a New Generation of Evolutionary Algorithms (1st ed.). Berlin: Springer. ISBN 978-3-540-23774-7.
  17. ^ Thierens, Dirk (11 September 2010). The Linkage Tree Genetic Algorithm. Parallel Problem Solving from Nature, PPSN XI. pp. 264-273. doi:10.1007/978-3-642-15844-5_27. ISBN 978-3-642-15843-8.
  18. ^ Martins, Jean P.; Fonseca, Carlos M.; Delbem, Alexandre C. B. (25 December 2014). "On the performance of linkage-tree genetic algorithms for the multidimensional knapsack problem". Neurocomputing. 146: 17-29. doi:10.1016/j.neucom.2014.04.069.
  19. ^ Engelbrecht, A. Fundamentals of Computational Swarm Intelligence. Wiley and Sons, 2005.
  20. ^ Dasgupta, D., editor. Artificial Immune Systems and Their Applications. Springer, 1998.
  21. ^ de Castro, L., Timmis, J. Artificial Immune Systems: A New Computational Intelligence Approach. Springer, 2002.
  22. ^ Paun, G. Membrane Computing: An Introduction. Springer, 2002.
  23. ^ Abelson, H., Allen, D., Coore, D., Hanson, C., Homsy, G., Knight Jr., T., Nagpal, R., Rauch, E., Sussman, G., Weiss, R. Amorphous computing. Communications of the ACM 43, 5 (May 2000), 74-82.
  24. ^ Pfeifer, R. and Füchslin, R. Morphological Computing (starts at p. 11), 2013.
  25. ^ Pfeifer, R. and Bongard, J. How the Body Shapes the Way We Think: A New View of Intelligence. MIT Press, 2006.
  26. ^ Langton, C., editor. Artificial Life. Addison-Wesley Longman, 1990.
  27. ^ Rozenberg, G. and Salomaa, A. The Mathematical Theory of L Systems. Academic Press, 1980.
  28. ^ Brooks, R. Artificial life: from robot dreams to reality. Nature 406 (2000), 945-947.
  29. ^ Lipson, H., Pollack, J. Automatic design and manufacture of robotic lifeforms. Nature 406 (2000), 974-978.
  30. ^ Adleman, L. Molecular computation of solutions to combinatorial problems. Archived 6 February 2005 at the Wayback Machine. Science 266 (1994), 1021-1024.
  31. ^ Kari, L. DNA computing - the arrival of biological mathematics. The Mathematical Intelligencer 19, 2 (1997), 9-22.
  32. ^ Fujibayashi, K., Hariadi, R., Park, S-H., Winfree, E., Murata, S. Toward reliable algorithmic self-assembly of DNA tiles: A fixed-width cellular automaton pattern. Nano Letters 8(7) (2007), 1791-1797.
  33. ^ Reif, J. and LaBean, T. Autonomous programmable biomolecular devices using self-assembled DNA nanostructures. Communications of the ACM 50, 9 (Sept. 2007), 46-53.
  34. ^ Seeman, N. Nanotechnology and the double helix. Scientific American Reports 17, 3 (2007), 30-39.
  35. ^ Rothemund, P., Papadakis, N., Winfree, E. Algorithmic self-assembly of DNA Sierpinski triangles. PLoS Biology 2, 12 (December 2004).
  36. ^ Rothemund, P. Folding DNA to create nanoscale shapes and patterns. Nature 440 (2006), 297-302.
  37. ^ Bath, J., Turberfield, A. DNA nanomachines. Nature Nanotechnology 2 (May 2007), 275-284.
  38. ^ Paun, G., Rozenberg, G., Salomaa, A. DNA Computing: New Computing Paradigms. Springer, 1998.
  39. ^ Hirvensalo, M. Quantum Computing, 2nd Ed. Springer, 2004.
  40. ^ Ursin, R. et al. Entanglement-based quantum communication over 144 km. Nature Physics 3 (2007), 481-486.
  41. ^ Negrevergne, C. et al. Benchmarking quantum control methods on a 12-qubit system. Physical Review Letters 96: art. 170501, 2006.
  42. ^ Vedral, V. Decoding Reality: The Universe as Quantum Information. Oxford University Press, 2010.
  43. ^ Cardelli, L. Abstract machines of systems biology. Archived 19 April 2008 at the Wayback Machine. Bulletin of the EATCS 93 (2007), 176-204.
  44. ^ Istrail, S., De-Leon, B-T., Davidson, E. The regulatory genome and the computer. Developmental Biology 310 (2007), 187-195.
  45. ^ Kohn, K. Molecular interaction map of the mammalian cell cycle control and DNA repair systems. Molecular Biology of the Cell 10(8) (1999), 2703-2734.
  46. ^ Nagasaki, M., Onami, S., Miyano, S., Kitano, H. Bio-calculus: its concept and molecular interaction. Genome Informatics 10 (1999), 133-143.
  47. ^ Regev, A., Shapiro, E. Cellular abstractions: Cells as computation. Nature 419 (2002), 343.
  48. ^ Cardelli, L. Brane calculi: Interactions of biological membranes. In LNCS 3082, pages 257-280. Springer, 2005.
  49. ^ Smith, H., Hutchison III, C., Pfannkoch, C., and Venter, C. Generating a synthetic genome by whole genome assembly: φX174 bacteriophage from synthetic oligonucleotides. PNAS 100, 26 (2003), 15440-15445.
  50. ^ Sazani, P., Larralde, R., Szostak, J. A small aptamer with strong and specific recognition of the triphosphate of ATP. Journal of the American Chemical Society 126(27) (2004), 8370-8371.
  51. ^ Weiss, R., Knight Jr., T. Engineered communications for microbial robotics. In LNCS 2054, pages 1-16. Springer, 2001.
  52. ^ Landweber, L. and Kari, L. The evolution of cellular computing: Nature's solution to a computational problem. Biosystems 52, 1/3 (1999), 3-13.
  53. ^ Angeleska, A.; Jonoska, N.; Saito, M.; Landweber, L. (2007). "RNA-guided DNA assembly". Journal of Theoretical Biology. 248 (4): 706-720. doi:10.1016/j.jtbi.2007.06.007. PMID 17669433.
  54. ^ Prescott, D., Ehrenfeucht, A., and Rozenberg, G. Template-guided recombination for IES elimination and unscrambling of genes in stichotrichous ciliates. J. Theoretical Biology 222, 3 (2003), 323-330.
  55. ^ Nakagawa, H., Sakamoto, K., Sakakibara, Y. Development of an in vivo computer based on Escherichia coli. In LNCS 3892, pages 203-212. Springer, 2006.
  56. ^ Zabet, N.R., Hone, A.N.W., Chu, D.F. Design principles of transcriptional logic circuits. Archived 7 March 2012 at the Wayback Machine. In Artificial Life XII: Proceedings of the Twelfth International Conference on the Synthesis and Simulation of Living Systems, pages 186-193. MIT Press, August 2010.
  57. ^ Duran-Nebreda, S.; Bassel, G. (April 2019). "Plant behaviour in response to the environment: information processing in the solid state". Philosophical Transactions of the Royal Society B. 374 (1774): 20180370. doi:10.1098/rstb.2018.0370. PMC 6553596. PMID 31006360.

This article was written based on the following references with the kind permission of their authors:

  • Lila Kari, Grzegorz Rozenberg (October 2008). "The Many Facets of Natural Computing". Communications of the ACM. 51 (10): 72–83. doi:10.1145/1400181.1400200.
  • Leandro Nunes de Castro (March 2007). "Fundamentals of Natural Computing: An Overview". Physics of Life Reviews. 4 (1): 1–36. Bibcode:2007PhLRv...4....1D. doi:10.1016/j.plrev.2006.10.002.

Many of the constituent research areas of natural computing have their own specialized journals and book series. Journals and book series dedicated to the broad field of natural computing include the International Journal of Natural Computing Research (IGI Global), Natural Computing (Springer Verlag), Theoretical Computer Science, Series C: Theory of Natural Computing (Elsevier), the Natural Computing book series (Springer Verlag), and the Handbook of Natural Computing (G. Rozenberg, T. Back, J. Kok, editors, Springer Verlag).

  • Ridge, E.; Kudenko, D.; Kazakov, D.; Curry, E. (2005). "Moving Nature-Inspired Algorithms to Parallel, Asynchronous and Decentralised Environments". Self-Organization and Autonomic Informatics (I). 135: 35–49. CiteSeerX 10.1.1.64.3403.
  • Swarms and Swarm Intelligence by Michael G. Hinchey, Roy Sterritt, and Chris Rouff.

For readers interested in a popular science article, consider this one on Medium: Nature-Inspired Algorithms


The state of AI in 2020: Biology and healthcare's AI moment, ethics, predictions, and graph neural networks

Research and industry breakthroughs, ethics, and predictions. This is what AI looks like today, and what it's likely to look like tomorrow.

By George Anadiotis for Big on Data | October 12, 2020 -- 14:46 GMT (07:46 PDT) | Topic: Artificial Intelligence

The State of AI Report 2020 is a comprehensive report on all things AI. Picking up from where we left off in summarizing key findings, we continue the conversation with authors Nathan Benaich and Ian Hogarth. Benaich is the founder of Air Street Capital and RAAIS, and Hogarth is an AI angel investor and a UCL IIPP visiting professor.

Key themes we covered so far were AI democratization, industrialization, and the way to artificial general intelligence. We continue with healthcare and biology's AI moment, research and application breakthroughs, AI ethics, and predictions.

Biology and healthcare's AI moment

A key point discussed with Benaich and Hogarth was the democratization of AI: What it means, whether it applies, and how to compete against behemoths who have the resources it takes to train huge machine learning models at scale.

One of the ideas examined in the report is to take pre-existing models and fine-tune them to specific domains. Benaich noted that taking a large model, or a pre-trained model in one field, and moving it to another field can work to bootstrap performance to a higher level:

"As far as biology and healthcare are becoming increasingly digital domains with lots of imaging, whether that relates to healthcare conditions or what cells look like when they're diseased, compiling data sets to describe that and then using transfer learning from ImageNet into those domains has yielded much better performance than starting from scratch."

This, Benaich went on to add, plays into one of the dominant themes in the report: Biology -- in which Benaich has a background -- and healthcare have their AI moment. There are examples of startups at the cutting edge of R&D moving to production tackling problems in biology. An application area Benaich highlighted was drug screening:

"If I have a software product, I can generate lots of potential drugs that could work against the disease protein that I'm interested in targeting. How do I know out of the thousands or hundreds of thousands of possible drugs, which one will work? And assuming I can figure out which one might work, how do I know if I can actually make it?"

Beyond computer vision, Benaich went on to add, there are several examples of AI language models being useful in protein engineering or in understanding DNA, "essentially treating a sequence of amino acids that encode proteins or DNA as just another form of language, a form of strings that language models can interpret just in the same way they can interpret characters that spell out words."

Transformer-based language models such as GPT3 have also been applied to tasks such as completing images or converting code between different programming languages. Benaich and Hogarth note that the transformer's ability to generalize is remarkable, but at the same time offer a word of warning in the example of code: No expert knowledge required, but no guarantees that the model didn't memorize the functions either.

This discussion was triggered by the question -- posed by some researchers -- whether progress in mature areas of machine learning is stagnant. In our view, the fact that COVID-19 has dominated 2020 is also reflected in the impact it has had on AI. And there are examples of how AI has been applied in biology and healthcare to tackle COVID-19.

Benaich used examples from biology and healthcare to establish that beyond research, the application area is far from stagnant. The report includes work in this area ranging from startups such as InVivo and Recursion to Google Health, DeepMind, and the NHS.

What's more, the US Medicaid and Medicare system has approved a medical imaging product for stroke that's based on AI. Despite pre-existing FDA approvals for deep-learning based medical imaging, whether that's for stroke, mammography, or broken bones, this is the only one so far that has actually gotten reimbursement, noted Benaich:

"Many people in the field feel that reimbursement is the critical moment. That's the economic incentive for doctors to prescribe, because they get paid back. So we think that's a major event. A lot of work to be done, of course, to scale this and to make sure that more patients are eligible for that reimbursement, but still major nonetheless."

Interestingly, the FDA has also published a new proposal to embrace the highly iterative and adaptive nature of AI systems in what they call a "total product lifecycle" regulatory approach built on good machine learning practices.

Graph neural networks: Getting three-dimensional

The report also includes a number of examples that, Benaich stated, "prove that the large pharma companies are actually getting value from working with AI-first drug discovery companies." This discussion naturally leads to the topic of progress in a specific area of machine learning: graph neural networks.

The connection was how graph neural networks (GNNs) are used to enhance chemical property prediction and guide antibiotic drug screening, leading to new drugs validated in vivo. Most deep learning methods focus on learning from two-dimensional input data, that is, data represented as matrices. GNNs are an emerging family of methods designed to process graph-structured data, such as the three-dimensional structure of molecules. This may sound cryptic, but it's a big deal: it enables more information to be processed by the neural network.
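
One message-passing step over a molecular graph can be sketched in a few lines of numpy: each atom (node) updates its feature vector from its bonded neighbours. The adjacency matrix, feature sizes, and weights below are toy stand-ins:

```python
import numpy as np

def gnn_layer(A, X, W):
    """A: (n, n) adjacency, X: (n, d) node features, W: (d, d) weights."""
    A_hat = A + np.eye(len(A))              # let each node see itself
    deg = A_hat.sum(axis=1, keepdims=True)
    messages = (A_hat / deg) @ X            # average neighbour features
    return np.maximum(messages @ W, 0)      # linear transform + ReLU

# Toy 3-atom chain (e.g. C-C-O heavy atoms) with 8 features per atom.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.rand(3, 8)
W = np.random.rand(8, 8)
H = gnn_layer(A, X, W)                      # updated atom embeddings
```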

"I think it comes down to one topic, which is the right representation of biological data that actually expresses all the complexity and the physics and chemistry and living nuances of a biological system into a compact, easy to describe mathematical representation that a machine learning model can do something with," said Benaich.

Sometimes it's hard to conceptualize biological systems as a matrix, so it could very well be that we're just not exploiting all of the implicit information that resides in a biological system, he went on to add. This is why graph representations are an interesting next step: they feel intuitive as a tool to represent something that is connected, such as a chemical molecule.

Graph neural networks enable the representation of 3-dimensional structures for deep learning. This means being able to capture, and use, more information, and it lends itself well to the field of biology. Image: M. Bronstein

Benaich noted examples in molecule property prediction and chemical synthesis planning, but also in trying to identify novel small molecules. Small molecules are treated as Lego building blocks. By using advances in DNA sequencing, all of these chemicals are mixed in a tube with a target molecule, and researchers can see what building blocks have assembled and bound to the target of interest.

When candidate small molecules that seem to work have been identified, GNNs can be used to try and learn what commonalities these building blocks have that make them good binders for the target of interest. Adding this machine learning layer to a standard and well-understood chemical screening approach gives a several-fold improvement on the baseline.

Hogarth, on his part, mentioned a recent analysis arguing that GNNs, the transformer architecture, and the attention-based methods used in language models share the same underlying logic, as you can think of sentences as fully connected word graphs. Hogarth noted the way the transformer architecture is creeping into lots of unusual use cases, and how scaling it up is increasing its impact:

"The meta point around the neural networks and these attention-based methods, in general, is that they seem to represent a sort of a general enough approach that there's going to be progress just by continuing to hammer very hard on that nail for the next two years. And one of the ways in which I'm challenging myself is to assume that we might see a lot more progress just by doing the same thing with more aggression for a bit.

And so I would assume that some of the gains that have been found in these GNNs cross-pollinate with the work that's happening with language models and transformers. And that approach continues to be a very fertile area for the kind of super general, high-level AGI-like research."

AI ethics and predictions

There's a ton of topics we could pick to dissect from Benaich and Hogarth's work, such as PyTorch overtaking TensorFlow in research, the boom in federated learning, the analysis on talent and retention per geography, progress (or lack thereof) in autonomous vehicles, AI chips, and AutoML. We encourage readers to dive into the report to learn more. But we wrap up with something different.

Hogarth mentioned that the speculation phase in AI for biology and healthcare is starting, with lots of capital flowing. There are going to be some really amazing companies that come out of it, and we will start to see a real deployment phase kick in. But it's equally certain, he went on to add, there are going to be instances that will be revealed to be total frauds.

So, what about AI ethics? Benaich and Hogarth cite work by pioneers in the field, touching upon issues such as commercial gender classification, unregulated police facial recognition, the ethics of algorithms, and regulating robots. For the most part, the report focuses on facial recognition. Facial recognition is widespread the world over and has led to controversy, as well as wrongful arrests. More thoughtful approaches seem to be gathering steam, Benaich and Hogarth note.

Hogarth referred to an incident in which a UK citizen claimed his human rights were breached when he was photographed while Christmas shopping. Although judges ruled against the claimant, they also established an important new duty for the police to make sure that discrimination is proactively "eliminated." This means that action on bias cannot be legally deferred until the tech has matured:

"This creates a much higher bar to deploying this software. And it creates almost a legal opportunity for anyone who experiences bias at the hands of an algorithm to have a foundation for suing the government or a private act of defiance technology," Hogarth said.

AI ethics often focuses on facial recognition, but it is becoming relevant in more and more domains.

MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets. © Adam Harvey / MegaPixels.cc

Hogarth also emphasized another approach, which he termed "API driven auditability." He referred to a new law passed in Washington State with active support from Microsoft. This law restricts law enforcement's use of facial recognition technology, by demanding that the software used must be accessible to an independent third party via an API to assess for "accuracy and unfair performance differences" across characteristics like race or gender.

Of course, even narrowing our focus on AI ethics, the list is endless: From bias to the use of technology in authoritarian regimes and/or for military purposes, AI nationalism, or the US tax code incentivizing replacing humans with robots, there's no shortage of causes for concern. Benaich and Hogarth, on their part, close their report by offering a number of predictions for the coming year:

  * The race to build larger language models continues, and we see the first 10-trillion-parameter model.
  * Attention-based neural networks move from NLP to computer vision and achieve state-of-the-art results.
  * A major corporate AI lab shuts down as its parent company changes strategy.
  * In response to US DoD activity and investment in US-based military AI startups, a wave of Chinese and European defense-focused AI startups collectively raise over $100 million in the next 12 months.
  * One of the leading AI-first drug discovery startups (e.g. Recursion, Exscientia) either IPOs or is acquired for over $1 billion.
  * DeepMind makes a major breakthrough in structural biology and drug discovery beyond AlphaFold.
  * Facebook makes a major breakthrough in augmented and virtual reality with 3D computer vision.
  * NVIDIA does not end up completing its acquisition of Arm.

The track record for the predictions offered in last year's State of AI Report was pretty good: they got 5 out of 6 right. Let's see how this year's set of predictions fares.


The Unreasonable Effectiveness of Convolutional Neural Networks in Population Genetic Inference

Population-scale genomic data sets have given researchers incredible amounts of information from which to infer evolutionary histories. Concomitant with this flood of data, theoretical and methodological advances have sought to extract information from genomic sequences to infer demographic events such as population size changes and gene flow among closely related populations/species, construct recombination maps, and uncover loci underlying recent adaptation. To date, most methods make use of only one or a few summaries of the input sequences and therefore ignore potentially useful information encoded in the data. The most sophisticated of these approaches involve likelihood calculations, which require theoretical advances for each new problem, and often focus on a single aspect of the data (e.g., only allele frequency information) in the interest of mathematical and computational tractability. Directly interrogating the entirety of the input sequence data in a likelihood-free manner would thus offer a fruitful alternative. Here, we accomplish this by representing DNA sequence alignments as images and using a class of deep learning methods called convolutional neural networks (CNNs) to make population genetic inferences from these images. We apply CNNs to a number of evolutionary questions and find that they frequently match or exceed the accuracy of current methods. Importantly, we show that CNNs perform accurate evolutionary model selection and parameter estimation, even on problems that have not received detailed theoretical treatments. Thus, when applied to population genetic alignments, CNNs are capable of outperforming expert-derived statistical methods and offer a new path forward in cases where no likelihood approach exists.
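As a rough illustration of the paper's core idea, the sketch below treats a binary alignment matrix as a one-channel image and feeds it to a small convolutional network. This is a hedged example only: the layer sizes, class count, and alignment dimensions are made up and are not the authors' architecture.

    import torch
    import torch.nn as nn

    # Treat a population genetic alignment as a 1-channel image of shape
    # (individuals x polymorphic sites), here 50 haplotypes x 200 sites.
    class AlignmentCNN(nn.Module):
        def __init__(self, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            self.classifier = nn.Linear(32 * 4 * 4, n_classes)

        def forward(self, x):
            h = self.features(x)
            return self.classifier(h.flatten(1))

    batch = torch.randint(0, 2, (8, 1, 50, 200)).float()  # fake 0/1 alignments
    logits = AlignmentCNN()(batch)   # one score per candidate evolutionary model
    print(logits.shape)              # torch.Size([8, 3])

In practice such a network would be trained on alignments simulated under the competing evolutionary models, so that the class scores implement likelihood-free model selection.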

Figures

  * Schematics of a standard feedforward neural network and two convolutional neural network designs…
  * Example population genetic alignments visualized as black-and-white images. An unsorted alignment matrix (left)…
  * The impact of input data reorganization on accuracy. We show the RMSE of…
  * Performance of classifiers for detecting introgression. We use confusion matrices to show the…
  * Accuracy of recombination rate estimates from LDhat and our CNN. (A)…
  * Confusion matrices showing accuracies of two methods that seek to detect recent positive…
  * Accuracy of demographic inference CNN. The scatterplots show the correlation between true and…


What are neural networks used for?

Photo: For the last two decades, NASA has been experimenting with a self-learning neural network called Intelligent Flight Control System (IFCS) that can help pilots land planes after suffering major failures or damage in battle. The prototype was tested on this modified NF-15B plane (a relative of the McDonnell Douglas F-15). Photo by Jim Ross courtesy of NASA.

On the basis of this example, you can probably see lots of different applications for neural networks that involve recognizing patterns and making simple decisions about them. In airplanes, you might use a neural network as a basic autopilot, with input units reading signals from the various cockpit instruments and output units modifying the plane's controls appropriately to keep it safely on course. Inside a factory, you could use a neural network for quality control. Let's say you're producing clothes washing detergent in some giant, convoluted chemical process. You could measure the final detergent in various ways (its color, acidity, thickness, or whatever), feed those measurements into your neural network as inputs, and then have the network decide whether to accept or reject the batch.

There are lots of applications for neural networks in security, too. Suppose you're running a bank with many thousands of credit-card transactions passing through your computer system every single minute. You need a quick automated way of identifying any transactions that might be fraudulent, and that's something for which a neural network is perfectly suited. Your inputs would be things like 1) Is the cardholder actually present? 2) Has a valid PIN been used? 3) Have five or more transactions been presented with this card in the last 10 minutes? 4) Is the card being used in a different country from the one where it's registered? And so on. With enough clues, a neural network can flag any transactions that look suspicious, allowing a human operator to investigate them more closely. In a very similar way, a bank could use a neural network to help it decide whether to give loans to people on the basis of their past credit history, current earnings, and employment record.
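To make that concrete, here is a hypothetical sketch of the setup just described: the yes/no questions become a binary feature vector, and a single artificial neuron turns them into a risk score. The weights and threshold are invented purely for illustration; a real system would learn them from labelled historical transactions.

    import numpy as np

    features = np.array([
        1.0,  # cardholder present?
        0.0,  # valid PIN used?
        1.0,  # >= 5 transactions with this card in the last 10 minutes?
        1.0,  # card used outside its country of registration?
    ])

    weights = np.array([-1.5, -2.0, 2.5, 1.8])   # illustrative values only
    bias = -0.5

    # One artificial neuron: weighted sum squashed to a 0-1 risk score.
    score = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))
    if score > 0.5:
        print(f"flag for human review (risk score {score:.2f})")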

Photo: Handwriting recognition on a touchscreen tablet computer is one of many applications perfectly suited to a neural network. Each character (letter, number, or symbol) that you write is recognized on the basis of key features it contains (vertical lines, horizontal lines, angled lines, curves, and so on) and the order in which you draw them on the screen. Neural networks get better and better at recognizing over time.

Many of the things we all do every day involve recognizing patterns and using them to make decisions, so neural networks can help us out in zillions of different ways. They can help us forecast the stock market or the weather, operate radar scanning systems that automatically identify enemy aircraft or ships, and even help doctors to diagnose complex diseases on the basis of their symptoms. There might be neural networks ticking away inside your computer or your cellphone right this minute. If you use cellphone apps that recognize your handwriting on a touchscreen, they might be using a simple neural network to figure out which characters you're writing by looking out for distinct features in the marks you make with your fingers (and the order in which you make them). Some kinds of voice recognition software also use neural networks. And so do some of the email programs that automatically differentiate between genuine emails and spam. Neural networks have even proved effective in translating text from one language to another.

Google's automatic translation, for example, has made increasing use of this technology over the last few years to convert words in one language (the network's input) into the equivalent words in another language (the network's output). In 2016, Google announced it was using something it called Neural Machine Translation (NMT) to convert entire sentences, instantly, with a 55–85 percent reduction in errors. This is just one example of how Google deploys neural-network technology: Google Brain is the name it's given to a massive research effort that applies neural techniques across its whole range of products, including its search engine. It also uses deep neural networks to power the recommendations you see on YouTube, with models that "learn approximately one billion parameters and are trained on hundreds of billions of examples." [5]

All in all, neural networks have made computer systems more useful by making them more human. So next time you think you might like your brain to be as reliable as a computer, think again, and be grateful you have such a superb neural network already installed in your head!


2 Answers

One incredibly important difference between humans and NNs is that the human brain is the result of billions of years of evolution, whereas NNs were partially inspired by looking at the result and thinking "… we could do that" (utmost respect for Hubel and Wiesel).

Human brains (and in fact anything biological) have an embedded structure encoded in the DNA of the animal. The human genome holds roughly 800 MB of raw data (about 3 billion base pairs at two bits each), and that comparatively tiny store incredibly contains the information for where arms go, where to put sensors and in what density, how to initialize neural structures, the chemical balances that drive neural activation, memory architecture, and learning mechanisms, among many other things. This is phenomenal. Note that the placement of neurons and their connections isn't encoded in DNA; rather, the rules dictating how those connections form are. This is fundamentally different from simply saying "there are 3 conv layers then 2 fully connected layers…". There has been some promising progress in neuroevolution that I highly recommend checking out.
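For readers unfamiliar with neuroevolution, here is a minimal sketch of the idea in numpy: instead of backpropagation, a population of weight vectors is mutated and selected by fitness. The toy XOR task, population size, and mutation scale are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(w):
        # Toy task: how well does a tiny 2-4-1 net with weights w fit XOR?
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
        y = np.array([0, 1, 1, 0])
        W1, b1, W2 = w[:8].reshape(2, 4), w[8:12], w[12:16]
        hidden = np.tanh(X @ W1 + b1)
        pred = hidden @ W2
        return -np.mean((pred - y) ** 2)   # higher is better

    # Simple evolutionary loop: mutate, evaluate, keep the best.
    pop = rng.normal(size=(50, 16))
    for gen in range(200):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-10:]]          # top 10 survive
        children = np.repeat(parents, 5, axis=0)
        pop = children + rng.normal(scale=0.1, size=children.shape)

    best = pop[np.argmax([fitness(w) for w in pop])]
    print("best fitness:", fitness(best))

The analogy to biology is loose (real genomes encode growth rules, not weights), but it captures the core point: structure can be found by selection rather than specified by hand.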

Another important difference is that during "runtime" (lol), human brains (and other biological neural nets) have a multitude of functions beyond the neurons, such as glial cells. There are about 3.7 glial cells for every neuron in your body. They are supportive cells in the central nervous system that surround neurons, provide support for and insulation between them, and trim dead neurons. This maintenance continuously updates neural structures and allows resources to be utilized most effectively. With fMRIs, neurologists are only beginning to understand how these small changes affect brains.

This isn't to say that it's impossible to have an artificial NN with the same high-level capabilities as a human. It's just that there is a lot missing from our current models. It's like we are trying to replicate the sun with a campfire; but heck, they are both warm.

Comparing Unlike Objects

The comparison between a person and an artificial network cannot be made on an equal basis. The former is a composition of many things that the latter is not.

Unlike an artificial network sitting in computer memory on a laptop or server, a human being is an organism, from head to toe, living in the biosphere and interacting with other human beings from birth.

Human Training

We have latent intelligence in the gametes that met to form us, solidified as our genetic code at fertilization, but it is not yet trained. It cannot be until the brain grows from its first cells, directed by the genetic expressions of the brain's metabolic, sensory, cognitive, motor control, and immune structure and function. After nine months of growth, a newborn baby's intelligence is not yet exhibited in motion, language, or behavior, other than to suck liquid food.

Our intelligence begins to emerge after initial basic behavioral training and does not reach the ability to pass a test indicating academic abilities until the corresponding stages of development in a family structure and components of education are complete. These are all observations well studied and documented by those in the field of developmental psychology.

Artificial Networks are Not Particularly Neural

An artificial network is a distant and distorted conceptual offspring of a now obsolete model of how neurons behave in networks. Even when the perceptron was first conceived, it was known that neurons react to activation from electrical pulses transmitted across synapses from other neurons arranged in complex micro-structures, not by applying an activation function to a vector-matrix product. The parameter matrix at the input of a layer of artificial neurons sums attenuated signals; it does not react electro-chemically to pulses that may only be roughly aligned in time.
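For contrast, the snippet below is essentially all that a layer of artificial neurons computes; the sizes and the tanh nonlinearity are arbitrary example choices.

    import numpy as np

    # What a layer of artificial "neurons" actually computes: an activation
    # function applied to a vector-matrix product, nothing pulse-like.
    rng = np.random.default_rng(1)
    x = rng.random(10)            # input signals from the previous layer
    W = rng.normal(size=(5, 10))  # one weight row per artificial neuron
    b = np.zeros(5)

    activations = np.tanh(W @ x + b)   # attenuate, sum, squash
    print(activations)                 # 5 static output values, no timing involved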

Since then, imaging and in vitro study of neurons have been revealing the complexities of neuroplasticity (genetically directed morphing of the network topology of neurons), the many varieties of cell types, the grouping of cells into geometric functional structures, and the involvement of energy metabolism in the axon.

In the human brain, the chemical pathways of dozens of compounds regulate function and comprise global and regional states; the secretion, transmission, agonist and antagonist reception, interaction, and uptake of those compounds are all under study. There is barely, if at all, an equivalent in the environment of the artificial networks deployed today, although nothing stops us from designing such regulation systems, and some of the most recent work has pushed the envelope in that direction.

Sexual Reproduction

Artificial networks are also not brains inside individuals produced by sexual reproduction, therefore potentially exhibiting in neurological capacity the best of two parents, or the worst. We do not yet spawn artificial networks from genetic algorithms, although that has been thought of and it is likely to be researched again.

Adjusting the Basis for Comparison

In short, the basis for comparison renders it meaningless. However, with some adjustment based on the above, another, similar comparison can be considered that is meaningful and on a more equal basis.

What is the difference between a college student and an artificial network that has billions of artificial neurons, well configured and attached to five senses and motor control, integrated inside a humanoid robot that has been nurtured and educated like a member of a family and a community for eighteen years since its initial deployment?

We don't know. We can't even simulate such a robotic experience of eighteen years or properly project what might happen with scientific confidence. Many of the AI components of the above are not yet well developed. When they are (and there is no particularly compelling reason to think they cannot be), we will find out together.

Research that May Provide an Answer

With further development in cognitive science, real-time neuron-level imaging, and work on the genetic expressions out of which brains grow, artificial neuron designs will likely progress beyond perceptrons and the more temporally aware LSTM, B-LSTM, and GRU varieties, and the topologies of neuron arrangements may break from their current Cartesian structural limitations.

The neurons in a brain are not arranged in orthogonal rows and columns. They form clusters that exhibit closed loop feedback at low structural levels. This can be simulated by a B-LSTM type artificial network cell, but any electrical engineer schooled in digital circuit design understands that simulation and realization are miles apart in efficiency. A signal processor can run thousands of times faster than its simulation.
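As a concrete reference point for the "B-LSTM type cell" mentioned above, here is a minimal PyTorch sketch of a bidirectional LSTM; the sequence length and layer sizes are arbitrary. Its forward and backward passes give each time step access to both past and future context, a crude software stand-in for low-level feedback loops.

    import torch
    import torch.nn as nn

    seq = torch.randn(1, 20, 8)                    # batch, time steps, features
    blstm = nn.LSTM(input_size=8, hidden_size=16,
                    batch_first=True, bidirectional=True)
    out, _ = blstm(seq)
    print(out.shape)   # torch.Size([1, 20, 32]): 16 units per direction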

From developments in computer vision, hearing, tactile-motor coordination, olfactory sensing, materials science, robotic packaging, and miniature power sources far beyond what lithium batteries can produce may come humanoid robots that can learn while interacting. At that time, it would probably be easy to find a family that cannot have children and would adopt an artificial child.

Scientific Rigor

Progress in these areas is necessary for such a comparison to be made on a scientific basis and for confidence in the comparison results to be published and pass peer review by serious researchers not interested in media hype, making the right career moves, or hiking their company's stock prices.


4 Discussion

We have shown that carefully designed deep neural networks are capable of significantly improving the predictive power in protein-RNA binding experiments. By using different network architectures, and by incorporating structure information in the learning process, we outperformed the state-of-the-art results for this task. In particular, we have demonstrated the power of recurrent neural networks for the task of RNA-binding prediction. Regarding convolutional neural networks, our architecture benefits from a higher number of convolution filters, as well as from a mixture of different filter lengths. While adding convolution filters improved the prediction accuracy of the network, we did not experience such an improvement when adding more convolutional layers. This coincides with the results of (Zeng et al., 2016), who explored CNNs for predicting protein-DNA binding.
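To illustrate the "mixture of filter lengths" idea, the sketch below runs parallel one-dimensional convolutions of different widths over a one-hot RNA encoding and concatenates the pooled features. This is a hedged example under assumed shapes, not the paper's exact architecture; the widths, filter counts, and sequence length are invented.

    import torch
    import torch.nn as nn

    class MultiWidthConv(nn.Module):
        def __init__(self, widths=(4, 8, 12), n_filters=64):
            super().__init__()
            # One convolution branch per motif width.
            self.branches = nn.ModuleList(
                nn.Conv1d(4, n_filters, kernel_size=w, padding=w // 2)
                for w in widths
            )

        def forward(self, x):                 # x: (batch, 4, seq_len), one-hot RNA
            feats = [torch.relu(b(x)).max(dim=2).values for b in self.branches]
            return torch.cat(feats, dim=1)    # (batch, 3 * n_filters)

    x = torch.zeros(2, 4, 40)                 # two toy 40-nt sequences
    scores = nn.Linear(192, 1)(MultiWidthConv()(x))  # binding score per sequence
    print(scores.shape)                        # torch.Size([2, 1])

The intuition is that short filters can capture compact sequence motifs while longer ones capture extended or composite binding preferences.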

The improvement was especially noticeable for in vitro experiments. This is possibly due to the fact that in vitro experiments are designed to quantitatively measure protein-RNA binding for hundreds of thousands of synthetic RNA sequences. We believe these results demonstrate the usefulness of deep neural networks in the area of protein-RNA binding, and more generally in the field of computational biology, where they are starting to be used on a large scale.

A long-standing goal in the field of protein-RNA interaction is accurate prediction of in vivo binding. As demonstrated in this study, current computational methods perform rather poorly in predicting bound and unbound RNA transcripts (average AUCs around 0.65, and some predictions are even below 0.5, which corresponds to random guessing). We believe that learning the intrinsic binding preferences of an RNA-binding protein would not suffice in this case, as the in vivo environment is much more complex. Not only do proteins compete over the same binding sites or co-bind together, RNA structure also differs between in silico, in vitro and in vivo environments. On top of that, RNAcompete and other in vitro experiments measure binding to short RNA sequences (30–40 nt) (Lambert et al., 2014; Ray et al., 2009), which cannot fold to complex RNA structures that are found in vivo, where transcripts span thousands of nucleotides. This alone already inhibits in vitro trained models from learning binding preferences to complex structures.

There are a number of questions worth pursuing following our work: Why were RNNs better than CNNs for in vitro data, but worse than them for in vivo data? Our training and test data were based on experiments where the binding between a single protein and numerous RNAs was measured. Can we design a DNN (or another ML mechanism) to train on many proteins and RNAs, and then to predict the binding of different proteins and RNAs? Another future line of research is to further improve the interpretability of the suggested networks. In particular, a better understanding of how the structure information is incorporated in the learning and prediction processes, and what filters are more dominant and why, may yield interesting biological insights.


