What is the relationship between the e value reported by HMMER and BLAST? (Are they equivalent?)


Both HMMER and BLAST report an E-value for alignments. Is it calculated in the same way and - assuming that default settings are used - can they be compared directly (are they equivalent)? If not equivalent, what settings can be changed to make them equivalent?

They use very different algorithms, so they should not produce the same E-values. BLAST uses a position-independent substitution matrix (e.g. BLOSUM), while HMMER uses a position-dependent scoring model that is different for every profile. Check out the "Background and Brief History" section in the HMMER user guide; it explains the conceptual differences at a broad level.

Therefore, I don't think there's any way to make them equivalent.
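The conceptual difference between the two scoring schemes can be illustrated with toy examples (all numbers below are invented for illustration; they are not real BLOSUM or profile values):

```python
# Position-independent scoring (BLAST-style): the same substitution
# matrix is applied at every alignment column.
matrix = {("A", "A"): 4, ("S", "S"): 4, ("A", "S"): 1, ("S", "A"): 1}

def matrix_score(query, target):
    return sum(matrix[(q, t)] for q, t in zip(query, target))

# Position-dependent scoring (profile-style): each column carries its own
# scores, derived from the alignment the profile was trained on.
profile = [{"A": 3, "S": -1},   # column 1 strongly prefers A
           {"A": -1, "S": 3}]   # column 2 strongly prefers S

def profile_score(profile, target):
    return sum(col[t] for col, t in zip(profile, target))
```

A substitution matrix scores "AS" and "SA" against themselves identically, while the profile cares which residue appears at which position; this is one reason the two tools' score distributions, and hence their E-values, are not interchangeable.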

The E-value, in short, is a statistical measure of how many matches as good as yours would be expected to arise by pure chance. This definition is 'discipline-wide', so it should be adhered to by any bioinformatics software that elects to provide it.

Thus, the longer and more complex your matching sequence is, the better (lower) the E-value it will receive when conducting sequence alignments.

As far as I am aware, they should be directly comparable. I don't see why they wouldn't be calculated in essentially the same way.

HMMER's website does state that it utilises two different kinds of E-value, however:

There are then two E-values for each domain:

Conditional E-value - This is the E-value that the inclusion and reporting significance thresholds are measured against (if those thresholds are defined as E-values). The conditional E-value is an attempt to measure the statistical significance of each domain, given that it has already been decided that the target sequence is a true homolog. It is the expected number of additional domains or hits that would be found with a domain/hit score this big in the set of sequences reported in the top hits list, if those sequences consisted only of random nonhomologous sequence outside the region that sufficed to define them as homologs.

Independent E-value - This is the significance of the sequence in the whole database search, if this were the only domain/hit that had been identified. If this E-value is not good, but the full sequence E-value is good, this is a potential red flag. Weak hits, none of which are good enough on their own, are summing up to lift the sequence up to a high score.

The calculation for E-Value via BLAST can be found here:
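In outline, BLAST's E-value follows the Karlin–Altschul formula E = K·m·n·e^(−λS). A minimal sketch (the K and λ defaults below are illustrative ballpark values, not the exact parameters of any particular scoring system):

```python
import math

def blast_evalue(score, m, n, K=0.041, lam=0.267):
    """Expected number of chance alignments with raw score >= `score`
    in a search space of query length m times database length n.
    K and lam (lambda) are illustrative Karlin-Altschul parameters."""
    return K * m * n * math.exp(-lam * score)
```

The E-value grows linearly with the search space m·n and falls exponentially with the alignment score, which is why the same alignment can receive different E-values against databases of different sizes.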

Defining a Core Genome for the Herpesvirales and Exploring their Evolutionary Relationship with the Caudovirales

The order Herpesvirales encompasses a wide variety of important and broadly distributed human pathogens. During the last decades, similarities in the viral cycle and the structure of some of their proteins with those of the order Caudovirales, the tailed bacterial viruses, have prompted speculation regarding the existence of an evolutionary relationship between these clades. To evaluate this hypothesis, we used over 600 Herpesvirales and 2000 Caudovirales complete genomes to search for the presence or absence of clusters of orthologous protein domains and constructed a dendrogram based on their compositional similarities. The results obtained strongly suggest an evolutionary relationship between the two orders. Furthermore, they allowed us to propose a core genome for the Herpesvirales, composed of four proteins, including the ATPase subunit of the DNA-packaging terminase, the only protein with previously verified conservation. Accordingly, a phylogenetic tree constructed with sequences derived from the clusters associated with these proteins grouped the Herpesvirales strains according to the established families and subfamilies. Overall, this work provides results supporting the hypothesis that the two orders are evolutionarily related and contributes to the understanding of the history of the Herpesvirales.


Biochemical networks underlie essentially all cellular functions [1, 2]. Proteins do not act alone. Instead, they connect with each other to form pathways, such as the MAP kinase cascades and the glycolysis pathway. The connections are often direct physical protein-protein interactions or enzyme-substrate relationships. They can also be indirect ones. For instance, metabolic enzymes are usually connected through a chain of biochemical reactions they catalyze, even though the enzymes may not be physically associated with each other. And pathways in turn join together to form networks, such as the signaling and the metabolic networks. It is via such networks that genomic information gives rise to cellular functions and genotypes are translated into phenotypes. Biochemical network models have thus long served effectively as platforms for analysis of high-throughput experimental data, e.g., microarray or next generation sequencing based gene expression data [3–5].

A prominent category of constituents in biochemical networks is proteins encoded by duplicate genes, also termed paralogs [6]. Duplicate genes arose from genomic duplication events, which can be whole-genome duplication (WGD) or small-scale duplication (SSD). Genomic duplication is a major driving force of biological evolution [6–8]. Proteins of duplicate genes are thus abundant in biochemical networks. Moreover, their abundance increases along with genomic complexity, which is quantified by genome size, gene number, abundance of spliceosomal introns and mobile genetic elements, from bacterial to uni-cellular eukaryotes, to multi-cellular species [9]. Proteins of duplicate genes function and evolve in biochemical networks [10, 11]. Duplicate gene evolution is frequently analyzed in the context of biochemical networks, such as the protein-protein interaction networks [12–14] and the metabolic networks [15, 16], as well as other biological networks [17, 18].

A critical issue is gene duplicability. This term captures the selective gene duplication pattern universally observed in sequenced genomes [19–21]. A small portion of the genes in a genome has extraordinarily high duplicate counts, while the vast majority either are singletons or have only a few duplicates. In other words, a small number of gene families are selectively expanded during the genomic evolution process. Quantitatively, this phenomenon is often described by a power-law relationship between the number of genes P(k) with k duplicates and the duplicate count k: P(k) ∝ k^−α, with α a positive constant. This relationship holds true regardless of which duplicate gene detection method was used; FASTA, BLAST, as well as protein domain based methods have all been used [19, 22–24]. Moreover, this relationship holds true in bacterial, unicellular eukaryotic and multicellular genomes, and changes in the value of α can be used to quantify enrichment of duplicate genes as genomic complexity increases [22]. We operationally define gene duplicability, as popularly done, as the number of duplicates a gene has or the size of the gene family in a genome [25–28], although slightly different definitions also exist [29].
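The exponent α of such a power law can be estimated from a list of gene-family sizes. A simplified continuous maximum-likelihood sketch (real analyses would use the discrete form and a fitted lower cut-off k_min):

```python
import math
import random

def fit_alpha(family_sizes, kmin=1.0):
    """MLE for the exponent of P(k) ~ k^-alpha, continuous approximation:
    alpha = 1 + n / sum(ln(k_i / kmin)) over sizes k_i >= kmin."""
    ks = [k for k in family_sizes if k >= kmin]
    return 1.0 + len(ks) / sum(math.log(k / kmin) for k in ks)

# Sanity check against synthetic data drawn from a known power law
# by inverse-transform sampling: x = (1 - u)^(-1/(alpha - 1)).
random.seed(0)
alpha_true = 2.5
samples = [(1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(20000)]
alpha_hat = fit_alpha(samples)
```

On the synthetic sample the estimate lands close to the true exponent, illustrating that the family-size distribution alone suffices to recover α.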

How and why did the selective gene duplicability pattern described above emerge? Two seemingly contradictory factors should contribute significantly: the opportunity to derive novel genetic materials from existing ones and the need to minimize deleterious effects of gene duplication. The first is the evolutionary advantage that genomic duplication confers to a species. A gene in the duplicated regions would have two copies. Subsequently, the pair of duplicate genes would accumulate mutations. Very often, one of the two duplicates became a pseudogene and was silenced [6, 30]. More importantly, the mutations sometimes led to functional diversification, either neo- or sub-functionalization, between the pair [7, 22, 31, 32]. This divergence can be in spatial-temporal expression patterns, interaction partners, enzymatic specificities of their proteins or subcellular locations of their proteins, etc. On the other hand, gene duplicability is limited, as postulated by the gene balance hypothesis, by the second factor – the potential detrimental effects of gene duplication due to disruption of the stoichiometric balance between protein products of duplicated and non-duplicated genes [28, 33, 34]. For instance, specific ratios among subunits are required for formation of protein complexes, which are major components of biochemical networks. Unless the genes for every subunit are all duplicated, a genomic duplication event would disrupt the balance. Rapid neo- or sub-functionalization between the two duplicates would restore the stoichiometric balance and alleviate this gene dosage constraint, thus enhancing gene duplicability. For instance, in multi-cellular genomes, enhanced functional diversification through accumulation of introns has been associated with higher duplicate gene survival rates [9, 35].

Thus, functional diversification of duplicate genes not only promotes genomic functional innovation, but also alleviates potential deleterious effects of gene duplication. It is very likely that selective gene family expansion and enhanced diversification within the expanding families proceeded inextricably hand-in-hand. In other words, duplicate genes in larger gene families should have diverged from each other to a higher extent than those in smaller families. For the sake of consistency with the usage of “duplicability” to refer to the propensity of a gene to be duplicated (duplication rate and duplicate survivability) [27], we use the term “diversifiability” as its sister term to refer to the propensity of duplicate genes to undergo diversification (neo- or sub-functionalization). Similar to duplicability being operationally computed as the number of duplicates a gene has or the size of the duplicate gene family, diversifiability can be computed as the degree of diversification among duplicate genes. We hypothesized positive correlations between gene duplicability and diversifiability.

Testing the hypothesis requires quantifying diversifiability of duplicate genes. Three metrics were used in this study. Two of them were developed in the context of biochemical networks: one measures the extent to which duplicate genes diverge sufficiently for their proteins to participate in mutually antagonizing pathways in a network; the other measures the pair-wise shortest network distance among the proteins of duplicate genes. As the third metric, a protein sequence homology based clustering coefficient was used to quantify sequence divergence among duplicate genes. We report, for each of the three metrics, a positive correlation between gene duplicability and diversifiability.

8 Answers

A collection of thoughts on this:

  • Most of the history of Stack Overflow has been defined by a frantic search for "that one weird trick" that would bring, not profitability, but wildly inflated growth. Y'know the sort of thing that gets a tiny company bought for a billion or so. That. Leads to some bad, short-term thinking. Led. Led to some bad, short-term thinking. Lots of other factors can also lead to bad, short-term thinking of course. But at least there's one less factor in play now.
  • This new owner is already involved in the software educational space. Which is sorta like saying they're in the "selling pickaxes to prospectors" space. That's not a bad fit for SO, or rather, it could be a good fit for SO. But, these sorts of things can easily become more parasitic than symbiotic. It is crucial to watch for that balance to tip. Be aware: it may tip for you, personally long before it tips for everyone.
  • No one is going to talk about changing plans right now. No one can change plans right now. Best for everyone if "stay the course" is the consistent message. This announcement is a start, not a conclusion. Expect plans to change significantly shortly after this finalizes, with more dramatic changes coming as current management cashes out over time. What will change is anyone's guess. Assume that there's a reason for the purchase, a rather significant and unique bit of value that the new owner sees and wishes to nurture. Probably also safe to assume it isn't Teams in its current form.
  • This feels obvious, but. Expect more discontinuity in the folks who work for the company as they interact with y'all. Lotta folks already left over the past few years, and that's been felt - well, change in ownership tends to not make that stop happening. This isn't just about who you talk to when you email support; it can have subtle and far-reaching effects on what gets done, how it gets done, and how that is communicated. Those effects can be good - I think we've all seen some positive aspects of some recent turnovers - but, there will be change.
  • More than anything, the site is now a Product. That's not a euphemism: someone bought it, it's a product, they bought it for a purpose, and just as the man who buys a steak knife while needing a saw will see that 2x4 cut through even if it takes all night, the Product will serve its buyer's purpose. The site we saw spring into being 13 years ago had as its first and primary purpose to facilitate communication between programmers. That's not what this is anymore. It might continue to serve that role - I sure hope so! - but that is officially, definitively not its defining reason for existence.

Stack Overflow is all grown up. Like CodeGuru, like LiveJournal, like GitHub, like so many others. Its adolescent innocence is gone, and what happens now must happen with intent and purpose. We'll have to see what that ultimately means.


  1. I have no insider knowledge. This is just speculation and opinion
  2. There's a possibility that I'll benefit financially from this sale. So, y'know, take the positive stuff above with a grain of salt.
  3. There's a possibility that I won't benefit at all from this sale. So, y'know, take the negative stuff above as sour grapes.
  4. Whoah - did you see that flash in the sky? Was that a plane, or. A satellite?
  5. Trying to predict the future is always fun, and always pointless. Wars and rumors of wars.

The TLDR is not much this year. We have our current strategy, roadmaps and plans for this year and continue to be focused on those. It's business as usual; as it says in our blog post, we will be operating independently. The leadership team is staying, including me. Most of the company just found out about this today, and many are in shock and excited about the future. Prosus is very community-focused and excited about what you all have built. As we start to plan for 2022, I think we will see more opportunities to invest in our public platform sites and community. I will be publishing my State of the Stack blog and meta post this month and will go into more detail there.

For now, if you want to know more, we have a bit more detail in our blog post.

The pessimist case goes something like this:

The new company spent a huge sum of money on Stack Overflow and wants results. That'll mean selling us back our answers, charging a membership fee or cranking up ads. Whatever they decide to do, it will prioritize making money over the community who made these sites valuable.

Obviously that could happen. If so, that would be disappointing to say the least. Still, the question is what does a company expect from a $1.8 billion investment?

Prosus is a holding company. Probably the best known holding company is Warren Buffett's Berkshire Hathaway. (Disclosure: I'm an investor in Berkshire. I also have vested options in Stack Overflow from my time as an employee.) Joel Spolsky's blog post suggests Stack Overflow will be allowed to operate independently, which is not unusual for a holding company. The core competency of a holding company is allocating resources to the most promising subsidiaries rather than actively managing them.

Very likely Stack Overflow will go into the "Ventures" division which includes:

The company's annual report gives us some clues about their intentions:

Ventures is about building the next wave of growth for the group. We invest with a long-term vision in mind but make sure to tether that vision to short- and medium-term operating realities around risks, competitive dynamics, future capital needs, and other considerations. Our capital commitments are commensurate with this balanced assessment. Over time, as we build our understanding and expertise, the amounts invested may grow substantially. A good example of this approach is Food Delivery, which was nurtured as part of Ventures before becoming a standalone core segment last year.—Martin Tschopp COO, Ventures

Cutting through the business speak, this means the Ventures segment hopes to invest in subsidiaries so that they become profitable in future years. For the past few years, Stack Overflow investors have been looking for an exit. Now the investors are looking to build value. It's a real change in incentives that probably will be a net positive for people who benefit from the information bound up in the network. (That means us!)

It's important to recall that CC BY-SA contributions can be used by third parties. Anyone can download the data and create their own resource. So the $1.8 billion isn't about the content. Instead, the value comes from:

  1. the brand,
  2. proprietary code,
  3. employees who continue to work for the company and
  4. the connection to a community of contributors.

In my opinion, this acquisition reduces the odds that Stack Overflow will go dark. There's an incentive to keep the investment in play even if "short- and medium-term operating realities around risks, competitive dynamics, future capital needs, and other considerations" keep Prosus from increasing their investment.

Now there are still risks. For one thing, none of the announcements mention Stack Exchange. Nothing says the new owner will be any better at recognizing the potential of non-coding Q&A than the current owner. I'm also mildly worried about a line from the Prosus press release:

"There is an opportunity to connect more deeply with their community through our other education platforms to further fulfill their learning needs."—Larry Illg, CEO of EdTech at Prosus

Still, the influence is just as likely to go the other way. By coincidence, I interviewed a CM at Brainly a few weeks ago. As we talked I kept thinking there's a potential harmony between a site dedicated to homework help (Brainly) and a network that is suspicious of homework questions (Stack Exchange). Again, holding companies are better than most at allowing subsidiaries some measure of autonomy.

I suspect a large multinational corporation will be an improvement for non-US employees and international Stack Overflows. (Maybe a little more friendly with China.)

Of all the options (IPO, buyout or status quo) this seems the best outcome. Management no longer needs to put lipstick on the pig. They won't need to spend time looking for investors. Their performance won't be tied to quarterly growth (at least not immediately). Nobody will need to worry about the company making payroll. I don't know a whole lot about Prosus, but it seems like as good a home as any.

The object of differential analysis is to find those compounds that show an abundance difference between experiment groups, thereby signifying that they may be involved in some biological process of interest to the researcher. Due to chance, there will always be some difference in abundance between groups. However, it is the size of this difference in comparison to the variance (i.e. the spread of the abundance values) that will tell us if this abundance difference is significant or not. Thus, if the difference is large but the variance is also large, then the difference may not be significant. On the other hand, a small difference coupled with a very small variance could be significant. We use ANOVA tests to formalise this calculation. The tests return a p-value that takes into account the mean difference, the variance and also the sample size. The p-value is a measure of how likely you are to get this compound data if no real difference existed. Therefore, a small p-value indicates that there is a small chance of getting this data if no real difference existed, and therefore you decide that the difference in group abundance data is significant. By small we usually mean a probability below 0.05.
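A minimal example of the test described above, using SciPy's one-way ANOVA (the abundance values are made up purely for illustration):

```python
from scipy.stats import f_oneway

# Hypothetical abundance measurements for one compound in two groups:
# a clear mean difference (about 10 vs 12) with small within-group variance.
control = [10.1, 9.8, 10.3, 10.0, 9.9]
treated = [12.0, 11.7, 12.3, 11.9, 12.1]

f_stat, p_value = f_oneway(control, treated)

# A small p-value says this data would be very unlikely if the group
# means were truly equal, so we call the difference significant.
significant = p_value < 0.05
```

With a large difference relative to the variance, as here, the p-value comes out far below 0.05; shrink the mean difference or inflate the spread and it will not.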

What are q-values, and why are they important?

False positives

A positive is a significant result, i.e. the p-value is less than your cut-off value, normally 0.05. A false positive is when you get a significant difference where, in reality, none exists. As mentioned above, the p-value is the chance that this data could occur given that no difference actually exists. So, choosing a cut-off of 0.05 means there is a 5% chance that we make the wrong decision.

The multiple testing problem

When we set a p-value threshold of, for example, 0.05, we are saying that there is a 5% chance that the result is a false positive. In other words, although we have found a statistically significant result, there is, in reality, no difference in the group means. While 5% is acceptable for one test, if we do lots of tests on the data, then this 5% can result in a large number of false positives. For example, if there are 2000 compounds in an experiment and we apply an ANOVA or t-test to each, then we would expect to get 100 (i.e. 5%) false positives by chance alone. This is known as the multiple testing problem.
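The expectation in the example above is just the number of tests times the threshold, and a quick simulation confirms it, since under the null hypothesis p-values are uniform on (0, 1):

```python
import random

def expected_false_positives(n_tests, alpha=0.05):
    # Each null test is a false positive with probability alpha,
    # so the expected count is n_tests * alpha.
    return n_tests * alpha

# Simulate 2000 null tests: p-values are Uniform(0, 1) when no real
# difference exists, so about 5% fall below the 0.05 threshold.
random.seed(42)
null_pvalues = [random.random() for _ in range(2000)]
observed = sum(p < 0.05 for p in null_pvalues)
```

The simulated count fluctuates around the expected 100, which is exactly the multiple testing problem: none of those "hits" correspond to a real difference.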

Multiple testing and the False Discovery Rate

While there are a number of approaches to overcoming the problems due to multiple testing, they all attempt to assign an adjusted p-value to each test or reduce the p-value threshold from 5% to a more reasonable value. Many traditional techniques such as the Bonferroni correction are too conservative in the sense that while they reduce the number of false positives, they also reduce the number of true discoveries. The False Discovery Rate approach is a more recent development. This approach also determines adjusted p-values for each test. However, it controls the number of false discoveries in those tests that result in a discovery (i.e. a significant result). Because of this, it is less conservative than the Bonferroni approach and has greater ability (i.e. power) to find truly significant results.

Another way to look at the difference is that a p-value of 0.05 implies that 5% of all tests will result in false positives. An FDR adjusted p-value (or q-value) of 0.05 implies that 5% of significant tests will result in false positives. The latter will result in fewer false positives.
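A minimal Benjamini–Hochberg adjustment, the standard way to turn raw p-values into FDR-adjusted values (often loosely called q-values; optimised q-value methods additionally estimate the proportion of true nulls from the p-value distribution):

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values: scale the i-th smallest
    p-value by m/i, then enforce monotonicity from the largest down."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):          # walk from largest p to smallest
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```

Note that adjacent adjusted values can come out identical (e.g. `bh_adjust([0.01, 0.02, 0.03, 0.04])` gives 0.04 four times), which is why repeated q-values appear in software output.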


Q-values are the name given to the adjusted p-values found using an optimised FDR approach. The FDR approach is optimised by using characteristics of the p-value distribution to produce a list of q-values. In what follows, I will tie up some ideas and hopefully this will help clarify what we have been saying about p and q values.

It is usual to test many hundreds or thousands of compound variables in a metabolomics experiment. Each of these tests will produce a p-value. The p-values take on a value between 0 and 1 and we can create a histogram to get an idea of how the p-values are distributed between 0 and 1. Some typical p-value distributions are shown below. On the x-axis, we have histogram bars representing p-values. Each bar has a width of 0.05 and so in the first bar (red or green) we have those p-values that are between 0 and 0.05. Similarly, the last bar represents those p-values between 0.95 and 1.0, and so on. The height of each bar gives an indication of how many values are in the bar. This is called a density distribution because the area of all the bars always adds up to 1. Although the two distributions appear quite different, you will notice that they flatten off towards the right of the histogram. The red (or green) bar represents the significant values, if you set a p-value threshold of 0.05.

Now, the q-values are simply a set of values that will lie between 0 and 1. Also, if you order the p-values used to calculate the q-values, then the q-values will also be ordered. This can be seen in the following screen shot from Progenesis CoMet; notice that q-values can be repeated:

To interpret the q-values, you need to look at the ordered list of q-values. There are 3516 compounds in this experiment. If we take unknown compound 1723 as an example, we see that it has a p-value of 0.0101 and a q-value of 0.0172. Recall that a p-value of 0.0101 implies a 1.01% chance of a false positive in each test, and so with 3516 compounds, we expect about 36 false positives, i.e. 3516 × 0.0101 = 35.51. In this experiment, there are 800 compounds with a value of 0.0101 or less, and so roughly 36 of these are expected to be false positives.

On the other hand, the q-value is 0.0172, which means we should expect 1.72% of all the compounds with a q-value less than this to be false positives. This is a much better situation. We know that 800 compounds have a q-value of 0.0172 or less, and so we should expect 800 × 0.0172 = 13.76 false positives rather than the predicted 36. Just to reiterate, false positives according to p-values take all 3516 values into account when determining how many false positives we should expect to see, while q-values take into account only those tests with q-values less than the threshold we choose. Of course, it is not always the case that q-values will result in fewer false positives, but what we can say is that they give a far more accurate indication of the level of false positives for a given cut-off value.
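The arithmetic from the example above, made explicit (numbers taken from the experiment described in the text):

```python
n_compounds = 3516      # total number of tests in the experiment
n_selected = 800        # compounds at or below the example threshold

# p-value view: the 1.01% error rate applies to ALL tests.
fp_from_p = n_compounds * 0.0101    # ~35.5 expected false positives

# q-value view: the 1.72% error rate applies only to the SELECTED tests.
fp_from_q = n_selected * 0.0172     # ~13.8 expected false positives
```

The same selection of 800 compounds thus carries a much smaller expected false-positive burden when judged by q-values than by the naive p-value accounting.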

When doing lots of tests, as in a metabolomics experiment, it is more intuitive to interpret p- and q-values by looking at the entire list of values in this way rather than looking at each one independently. In this way, a threshold of 0.05 has meaning across the entire experiment. When deciding on a cut-off or threshold value, you should do this from the point of view of how many false positives it will result in, rather than just randomly picking a p- or q-value of 0.05 and saying that every compound with a p-value less than this is significant.

Tools for identification

UNITE includes several tools that aid in the identification of unknown sequences. As unequivocal identification is the main purpose behind UNITE, the implementation of tools that extend beyond simple similarity searches (as offered by blast and variations thereof) was an essential part of the database development. This requirement has been met by the development of galaxie (Nilsson et al., 2004), which allows web-based, basic phylogenetic analyses. Galaxie provides maximum parsimony heuristic and neighbour-joining analyses under different evolutionary models. To date, two galaxie (galaxie blast, galaxieHMM) and one blast script have been implemented. We recommend galaxie blast as the most appropriate tool for identification of unknown ITS sequences. Other identification methods will be considered for inclusion in the future. We stress again that the UNITE database is, in its present form, restricted to ITS sequences specific to ECM fungi, and the input of query sequences from other fungi not covered by the database (saprophytic or parasitic fungi) is not recommended for obvious reasons. An overview of the identification process and data retrieval is presented in Fig. 1.

Composite of screen shots displaying some features of UNITE. (a) Within the UNITE home page, ITS sequences can be subjected to either a blast search against the database or further phylogenetic analyses through the galaxie blast or galaxieHMM module. (b,c,d) In the blast search and galaxie results pages, the best matches and terminal taxa, respectively, have interactive links to the specimen data. (e) Species names on the final output page have interactive links to the species description if available. (f–i) Species description includes text as well as illustrative material, and links to the nomenclatural database Cortbase.

Galaxie blast

galaxie blast uses the incoming sequence as a query in a blastn search against the UNITE data. Either the 15 best matches (as judged by the e value) or the best three matches followed by the next 12 matches of mutually distinct e values are collected; the latter option can be used to reduce the impact of identical sequences on the analysis. The inclusion of too many identical sequences in a phylogenetic analysis is likely to result in a tree with little, if any, resolution. Such trees are generally considered unsafe for inferring sequence relatedness. The matches and the query sequence are aligned in clustal W (Thompson et al., 1994) for joint phylogenetic analysis in phylip (Felsenstein, 1993), using either neighbour joining or the parsimony optimality criterion. The phylogenetic analyses feature random sequence addition, outgroup rooting, bootstrapping and branch swapping (parsimony). The results from blast and the phylogenetic tree are displayed together with the multiple alignment.


The hmmer package (Eddy, 1998) is used to compute hidden Markov models (HMMs) for prealigned nucleotide matrices. At present there are four such alignments: one for the Hymenoscyphus ericae (Ascomycetes, Helotiales) aggregate and three for the resupinate thelephoroid fungi (genera Pseudotomentella, Thelephora, Tomentella and Tomentellopsis). The query sequence is compared with each of the HMMs; if it produces a significant match with any of the alignments, the (best) significantly matching alignment is selected for phylogenetic analysis, the procedure of which is similar to that of galaxie blast. To obtain threshold values to distinguish match vs nonmatch, 30 ingroup and 30 outgroup sequences (where applicable) were matched to the HMMs and appropriate cut-off values were decided. The use of manually adjusted alignments opens up the possibility of using galaxieHMM to obtain very accurate estimates of a sequence's position within a group of sequences, notably at the genus or family level. We are in the process of adding more generic alignments.

Blast similarity search

For the sequence similarity search the program blastn (Altschul et al., 1997) is incorporated into UNITE. This freeware program is based on the blast algorithm and is used by NCBI and many other databases. The default E (expectation) value threshold for the similarity search is set to 1e−80. The stringency of the default E value represents an attempt to rule out weakly matching sequences.


In recent decades, due to its effectiveness and reasonable cost, chloroquine has been the best and most widely used antimalarial drug. Unfortunately, within a decade of its introduction, P. falciparum parasite resistance to chloroquine was observed in most of the malaria-endemic countries. Nowadays, the emergence of resistance against chloroquine is a considerable hurdle for malaria control [1].

In its erythrocyte stage, P. falciparum invades the red blood cells where it forms a lysosomal isolated acidic compartment known as the digestive vacuole (DV). In the erythrocyte, the parasite grows by ingesting haemoglobin from the host cell cytosol and depositing it in the DV, where the protein is degraded to its component peptides and heme, which is incorporated into the inert and harmless crystalline polymer hemozoin [2].

Chloroquine is a diprotic weak base and, at physiological pH (∼7.4), can be found in its un-protonated (CQ), mono-protonated (CQ+) and di-protonated (CQ++) forms. The uncharged chloroquine is the only membrane-permeable form of the molecule and it freely diffuses into the erythrocyte up to the DV. In this compartment, chloroquine molecules become protonated and, since membranes are not permeable to charged species, the drug accumulates in the acidic digestive vacuole [3], [4], where it is believed to bind haematin, a toxic byproduct of the haemoglobin proteolysis [5], [6], preventing its incorporation into the haemozoin crystal [2], [7], [8], [9], [10]. The free haematin seems to interfere with the parasite detoxification processes and thereby damage the plasmodium membranes [11].
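This pH-trapping mechanism can be sketched quantitatively with the Henderson–Hasselbalch relation. The pKa values and compartment pH values below are rough, illustrative figures, not measured parameters:

```python
def neutral_fraction(pH, pka1=10.2, pka2=8.1):
    """Fraction of a diprotic weak base present in the neutral,
    membrane-permeant form at a given pH (illustrative pKa values)."""
    h1 = 10 ** (pka1 - pH)        # mono-protonated relative to neutral
    h2 = h1 * 10 ** (pka2 - pH)   # di-protonated relative to neutral
    return 1.0 / (1.0 + h1 + h2)

def accumulation_ratio(ph_vacuole=5.2, ph_cytosol=7.4):
    """Only the neutral form equilibrates across the membrane, so the
    total (all-forms) concentration ratio between compartments is the
    inverse ratio of the neutral fractions on the two sides."""
    return neutral_fraction(ph_cytosol) / neutral_fraction(ph_vacuole)
```

With these illustrative numbers the ratio comes out in the tens of thousands, consistent with the strong vacuolar accumulation described above; a PfCRT-mediated leak of the charged species breaks exactly this trapping.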

Chloroquine-sensitive parasites (CQS) accumulate much more chloroquine in the DV than chloroquine-resistant strains (CQR) [4], [12], [13]. Recent studies have associated the reduced chloroquine accumulation observed in the parasite vacuole of resistant strains [12] with point mutations in the gene encoding the P. falciparum chloroquine resistance transporter (PfCRT) protein (for a review see [14], [15]). PfCRT is localized in the digestive vacuole membrane and contains 10 predicted membrane-spanning domains [16], [17]. CQR phenotype isolates have all been found to carry the critical PfCRT charge-loss mutation K76T or, in two single cases, K76N or K76I [18], [19], [20], [21]. Another mutation, S163R, restores the chloroquine sensitivity of CQR parasites [22], [23]. The K76T amino acid mutation might permit the interaction of PfCRT with positively charged chloroquine (CQ+ or CQ++) and allow its exit from the vacuole, with the net result of decreasing the chloroquine concentration within the DV [16], [24]. The single amino acid change S163R, by reintroducing a positive charge, is thought to block the leak of charged chloroquine from the DV, thus restoring chloroquine sensitivity [22], [23]. In a recent work, Martin and collaborators [25] were able to express both wild-type and resistant forms of PfCRT on the surface of Xenopus laevis oocytes and clearly demonstrated that chloroquine resistance is due to the direct transport of a protonated form of the drug out of the parasite vacuole via the K76T PfCRT mutant. Interestingly, they also showed that introducing the single K76T mutation into the PfCRT of CQS parasites is necessary but not sufficient for the transport of chloroquine via PfCRT. This evidence is, however, compatible with two alternative models for PfCRT [26]: (1) the channel model (i.e. a passive channel that enables charged chloroquine to leak out of the food vacuole down its electrochemical gradient) or (2) the carrier model (i.e. an active efflux carrier extruding chloroquine from the food vacuole).

Several experimental set-ups have been used to answer the question of whether PfCRT is a channel or a carrier, namely measurements of chloroquine accumulation, trans-stimulation assays and measurements of chloroquine efflux. However, the available data have been interpreted in different ways by different authors, and the debate about the nature of PfCRT is still ongoing.

Sanchez and colleagues showed that chloroquine accumulation is energy dependent in both CQR and CQS strains [27]. These authors monitored the time course of labeled chloroquine uptake in the absence and in the presence of glucose. Glucose was added 20 min after chloroquine addition (i.e. when the stationary state was reached). They found that, after glucose addition, the time courses of chloroquine uptake were markedly different in CQS and CQR: chloroquine accumulated to an increased extent in the CQS strain but decreased in the CQR strain. A similar experiment was repeated by the same authors in 2004 [28] using a broader range of antimalarial drugs. The authors concluded that the data are compatible with most models that attempt to account for chloroquine resistance and that some energy-dependent mechanism leads to loss of chloroquine from CQR cells and to its accumulation in CQS cells.

Bray et al. [29], in 2006, measured the Cellular Accumulation Ratio (CAR) of chloroquine in six experimental conditions, namely in sensitive and resistant strains, in the absence and presence of carbonylcyanide p-trifluoromethoxyphenylhydrazone (FCCP), an ionophoric uncoupling agent, and in the absence and presence of glucose. In particular, they found that, in the absence of glucose, the chloroquine CAR is equal in CQS and CQR strains (∼700), reaching a level approximately intermediate between that observed in CQS (∼1200) and CQR (∼350) strains in the presence of glucose. They used several different Plasmodium strains and showed that, in the absence of FCCP, i.e. when the pH of the vacuole is lower than the external pH, the chloroquine CAR is three to four times higher (about 1200 versus about 350) in sensitive strains than in resistant strains, while addition of FCCP abolishes the difference, leading to a CAR value of about 700 in both cases. They also demonstrated that, in the absence of glucose, the CAR is identical to that obtained in the presence of FCCP, suggesting that the energy provided by glucose is needed to maintain the pH difference between the cytoplasm and the DV. According to the authors, the hypothesis that PfCRT is an active efflux carrier does not appear to fully explain their findings: in this hypothesis, a single mutation would transform an energy-dependent chloroquine uptake process into an energy-dependent chloroquine efflux process. They therefore favor the hypothesis that chloroquine movement through PfCRT is not an active process.

Trans-stimulation of labeled chloroquine ([3H]-CQ) uptake after the parasites were pre-loaded with increasing concentrations of unlabelled chloroquine [27], [28], [29], [30] was observed in CQR strains but not in CQS isolates. Sanchez and collaborators conclude that the trans-stimulation phenomenon is unequivocally characteristic of saturable, carrier-mediated transport systems [31]. On the other hand, Bray and colleagues [29] propose that the trans-stimulation data reported by Sanchez et al. cannot by themselves be used to decide whether chloroquine transport is in the inward or the outward direction: stimulation of [3H]-CQ uptake could indeed be due to acceleration of the transporter cycle by the outgoing unlabelled chloroquine or, as Sanchez et al. assert, it could result from reduced efflux of [3H]-CQ due to competitive inhibition of the carrier by the pre-loaded unlabelled CQ. In the latter model, labeled and unlabelled chloroquine should be on the same side of the membrane when they interact with the carrier, i.e. they are mixed together. To verify this hypothesis, Bray et al. [29] incubated CQR lines with premixed (labeled and unlabelled) chloroquine but did not observe trans-stimulation of chloroquine uptake, suggesting that labeled and unlabelled chloroquine must be on opposite sides of the membrane for the trans-stimulation effect to take place, i.e. transport of unlabelled chloroquine via the carrier would be in the outward direction while labeled chloroquine transport would occur in the inward direction. In other words, mutant PfCRT would act as a bidirectional carrier, which is not compatible with an active efflux pump. In particular, these authors conjecture that the trans-stimulation results might also be explained in terms of a gated channel.

Many authors have measured chloroquine efflux from CQS and CQR isolates [27], [29], [32], [33] under different conditions: in the presence and absence of glucose, with or without proton-gradient uncoupling, and with or without verapamil, an L-type calcium channel blocker of the phenylalkylamine class. The results of these experiments have been interpreted in different ways by different authors and did not lead to a consensus view about the nature of PfCRT.

Sanchez et al. [34] studied the kinetics of chloroquine efflux under ‘reverse varying-trans’ conditions [31] in CQR and CQS isolates. This procedure investigates whether extracellular unlabelled chloroquine stimulates the release of pre-loaded [3H]-CQ. These authors expected that, in the presence of an active carrier, trans chloroquine should increase the initial efflux rate. They found an increased initial efflux rate for both CQR and CQS lines and accordingly proposed that both CQR and CQS parasites possess a chloroquine carrier, albeit with different transport properties.

It should be clear from this survey that qualitative interpretations of the experimental findings are insufficient to draw conclusions about the nature of PfCRT and that more quantitative analyses are required.

A quantitative model cannot be derived from transient experiments because kinetic parameters, such as the rate of vacuole pH equilibration during chloroquine uptake and the kinetic constants of chloroquine–haemozoin binding, are unknown and impossible to extract unambiguously from the available data. The only data that can be used to derive a quantitative model, without making an unreasonable number of hypotheses about the unknown parameters, are those measured at equilibrium.

As we will show, the analytical model that we developed and used here indicates that the equilibrium data are compatible with both the carrier and the channel model for PfCRT, which explains why they could be interpreted differently by different authors. On the other hand, the carrier and channel hypotheses are each compatible only with specific assumptions about the protonation state of the transported species and of the species binding to haem or haem-related molecules in the vacuole. For example, a carrier model is only compatible with the data if the transported molecule is protonated.

Another route to understanding the nature of a macromolecule is to study its evolutionary relationship with other proteins of known function. In this case too, data interpretation is controversial. Previous computational analyses of PfCRT [14], [16] suggested that PfCRT belongs to the drug/metabolite transporter (DMT) superfamily, whereas other studies proposed that it resembles ClC chloride channels [35].

Here, we use state-of-the-art bioinformatics tools to identify PfCRT homology relationships and provide evidence that it is indeed a member of the DMT superfamily. This finding is also supported by the observation that a three-dimensional model of the protein based on a DMT-like fold is consistent with experimental data on the mutations involved in the emergence and reversion of CQ resistance in Plasmodium.

By combining this latter conclusion with the results of the analytical method, we propose that PfCRT is a carrier of CQ+, CQ++ or both, and that either all chloroquine species or only the uncharged one can bind haem or haem-related species inside the vacuole.

Materials and methods

The non-redundant (NR) database of protein sequences (National Center for Biotechnology Information, NIH, Bethesda) was searched using the BLASTP program [11]. Profile searches were conducted using the PSI-BLAST program with either a single sequence or an alignment as the query, with a default profile inclusion expectation (E) value threshold of 0.01 (unless specified otherwise), and were iterated until convergence [11, 13]. For all searches with compositionally biased proteins, we applied a statistical correction for this bias to reduce false positives. Multiple alignments were constructed using the T_Coffee [20] or PCMA [50] programs, followed by manual correction based on the PSI-BLAST results. All large-scale sequence analysis procedures were carried out using the SEALS package [51].
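The iterate-until-convergence logic can be sketched schematically. In the toy code below, `search` and `build_profile` are hypothetical stand-ins for the real PSI-BLAST machinery, not actual NCBI APIs: hits at or below the inclusion E-value feed the next profile, and the search is declared converged when the included set stops changing between rounds.

```python
# Schematic sketch of an iterated profile search (stand-in functions,
# not real PSI-BLAST APIs).

def iterate_to_convergence(search, build_profile, query,
                           inclusion_e=0.01, max_rounds=20):
    """Iterate profile searches until the set of included hits stabilizes."""
    profile = build_profile([query])
    included = set()
    for _ in range(max_rounds):
        hits = search(profile)                       # {seq_id: e_value}
        new = {s for s, e in hits.items() if e <= inclusion_e}
        if new == included:                          # converged
            return included
        included = new
        profile = build_profile([query] + sorted(included))
    return included

# Toy demonstration: the broader second-round profile recruits "b",
# whose E-value drops below the inclusion threshold.
def toy_search(profile):
    if len(profile) == 1:
        return {"a": 1e-5, "b": 0.5}
    return {"a": 1e-6, "b": 0.005, "c": 0.2}

print(iterate_to_convergence(toy_search, list, "query"))
```

This mirrors the behavior described in the text: each round's profile is rebuilt from the sequences included so far, so borderline homologs can be recruited in later iterations while sequences above the threshold (here "c") stay excluded.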

Structural manipulations were carried out using the Swiss-PDB Viewer program [52], and ribbon diagrams were constructed with MOLSCRIPT [53]. Searches of the PDB database with query structures were conducted using the DALI program [54]. Protein secondary structure was predicted using a multiple alignment as the input for the PHD program [21]. Similarity-based clustering of proteins was carried out using the BLASTCLUST program [55]. Phylogenetic analysis was carried out using maximum-likelihood, neighbor-joining and least-squares methods [56, 57]. Briefly, this process involved the construction of a least-squares tree using the FITCH program [58] or a neighbor-joining tree using the NEIGHBOR [57] or MEGA [59] programs, followed by local rearrangement using the ProtML program of the Molphy package [57] to arrive at the maximum-likelihood (ML) tree. The statistical significance of various nodes of this ML tree was assessed using the relative estimate of logarithmic likelihood bootstrap (ProtML RELL-BP) with 10,000 replicates. Gene neighborhoods were determined by searching the NCBI PTT tables with a custom script written by the authors. Briefly, the procedure involved collecting fixed neighborhoods centered on a set of query genes, followed by clustering of their products using the BLASTCLUST program to identify related products. The presence of clusters of related genes amongst the neighbors of the query set implied the presence of conserved gene neighborhoods. This was used in combination with a previously reported screen for conserved gene neighborhoods [15, 35]. These tables can be accessed from the genomes division of the GenBank database [60].

Human–forest relationships: ancient values in modern perspectives

The relationship between human beings and forests has been important for the development of society. It is based on the various productive, ecological, social and cultural functions of forests. The cultural functions, including the spiritual and symbolic role of forests, are often not addressed with the same attention as the other functions. The aim of this paper is to emphasize that acknowledging these cultural bonds is necessary in the discussion of sustainable development. Forests should not be considered merely as a technical means of solving environmental and economic problems. To achieve a deeper understanding of society's dependency on forests, it is necessary to recognise the role of forests in our consciousness of being human. Giving a historical overview of the cultural bonds between people and forests, the first part of the paper focuses on non-productive aspects of human–forest relationships. Through history, forest values have changed and new functions have emerged. Industrialisation and urbanisation have contributed to an alienation from nature and weakened the connection of humans to forests. The consequences of these changes for the development of society and its environment are discussed in the second part of the paper. Finally, the paper elaborates how awareness of these cultural bonds can be strengthened in the population, and especially in forest management: a management that should address cultural, emotional and aesthetic aspects, in addition to economic, ecological and social functions, and lead towards a sustainable relationship between forests and society.


Two Sciences of Mind

In 1979, two cognitive scientists, Francisco Varela and Eleanor Rosch, and a computer scientist named Newcomb Greenleaf — all freshly minted Buddhists — organized what was to be a groundbreaking conference at The Naropa Institute in Boulder, Colorado. Recently established by Tibetan meditation master Chögyam Trungpa Rinpoche, the institute was designed to be a place where meditation traditions and western scholarship would meet on common ground.

The conference, entitled “Comparative Approaches to Cognition: Western and Buddhist,” would be an exciting convergence of East and West. While some participants remember it as stimulating in new and different ways, Rosch describes it as combative, an intellectual melee just short of chair-throwing. As she tells it, “We thought naively that the things we were discovering about mind through Buddhism were so meaningful and right-on that our colleagues would immediately want to sit down and discuss how this deep understanding of the mind fit into the various sciences. Wonderful things would happen. Instead, they looked at the thick reader we compiled, largely from Buddhist sources, and said, ‘What is this?’ When Francisco and the rest of us gave talks, they would say, ‘Huh?’ When the meditation sessions on the schedule failed to immediately provide the ‘information’ that they needed to ‘understand’ what we’d been saying, they reacted, ‘We’re at a conference and you’re asking us to sit here and do nothing?’ When it came time to discuss, they simply revolted. Clearly, we hadn’t gone where they were.” The Buddhism-science dialogue was off to a difficult start.

Francisco Varela, the conference’s leading light, was a walking Buddhism-science dialogue. As an undergraduate student in biology in his native Chile in the early sixties, he had burst into the office of professor Humberto Maturana and blurted out that he wanted to study “the role of mind in the universe.” Maturana, always a free-thinker, replied, “My boy, you’ve come to the right place.” The professor became his mentor and allowed him to explore notions about mind and body incorporating ideas from French phenomenology. Varela went on to Harvard and proved he had no fear of detail by earning his Ph.D. for a study of information processing in insect retinas. He was sure his career would take off in Salvador Allende’s new Chile, but not long after he returned home, the political tides turned, and he had to flee Colonel Augusto Pinochet’s military regime with only $100 in his pocket.

Varela ended up back in the United States, and in 1974, at a point when he felt cast adrift, he encountered an old friend he had met while living in Boston, Jeremy Hayward, a physicist who was a student of Trungpa Rinpoche. Hayward arranged for Varela and Trungpa to meet, and when Varela let on that he was struggling to find what exactly to do, Trungpa Rinpoche offered to teach him how to “do nothing,” quite a feat for someone with a mind as active as Varela’s.

He took to meditation with a vengeance. He saw it as the means for inquiring into his favorite subject, “mind in the universe.” While behaviorism had long since thrown out subjective investigation as so much twaddle, Varela was determined, according to Eleanor Rosch, “to reinstate first-person experience as a source of scientific knowledge, and open scientific inquiry to methods such as meditation.”

When Rosch met Varela in the late seventies at one of Trungpa’s programs, she had just started practicing Buddhism. She had made some pioneering discoveries in the emerging field of cognitive psychology and, like Varela, she saw meditation as the ultimate research tool, the one she had been looking for all her life. The Naropa meeting whetted their appetites, but it left them wanting something more – and better.



His Holiness the Dalai Lama and participants discuss neuroplasticity at the twelfth Mind and Life conference in Dharamsala, 2004. Photo (c) 2005 The Mind and Life Institute.

Meanwhile, the man whose name is now listed as Tenzin Gyatso at the top of the roster in every Mind and Life meeting was quietly having discussions with scientists every chance he got. His Holiness the XIVth Dalai Lama grew up in a place of extremely advanced learning that was nevertheless unblessed by the hand of Western science and technology. Yet every book, every vehicle, every machine, every device that came to him from the West while he was growing up became an object of intense curiosity, something to tear apart and put back together. The world of mechanisms was meeting the world of meditation.

When the Dalai Lama fled Tibet in 1959 at age twenty-four, he quickly saw how much the Western scientific ethos dominated affairs in the larger world. He had some catching up to do. He was determined to learn more and test what he knew, having just passed the difficult examinations for the Geshe Lharampa degree, the equivalent of a doctorate in Buddhist philosophy.

Before the Dalai Lama became a celebrity and a Nobel Prize winner, he was a humble monk leading a country that didn’t have a seat at the United Nations. People didn’t defer to him the way they do now. Nevertheless, he was able to develop friendships with a number of prominent European scientists, who were quite kind to him and genuinely enjoyed his company as an interlocutor. One of the first was Carl von Weizsäcker, brother of the one-time president of West Germany and assistant to the quantum physics luminary Werner Heisenberg. For days at a time, von Weizsäcker would sit with the Dalai Lama tutoring him on quantum physics and its philosophical implications. His Holiness also had the good fortune to befriend the physicist David Bohm, who had spent a great deal of time with Krishnamurti. His Holiness carried on a decades-long conversation with Bohm that, in his words, “fueled my thinking about the ways Buddhist methods of inquiry may relate to those used in modern science.” He also developed a close relationship with Sir Karl Popper, the most prominent philosopher of science. He learned from Popper’s teachings how the logic of science relied on abstraction, usually in mathematical form, and instrumentation (microscopes, telescopes, etc.). By contrast, the logic of Buddhism relied on natural language and examples drawn from unmediated personal experience.

Not all of the Dalai Lama’s interactions with science were so positive. In 1979, while Varela was wrestling with the crowd at Naropa, the Dalai Lama faced a hostile clutch of scientists at a conference in Russia, where one of them felt he was postulating the existence of a soul. If this dialogue was going to get off the ground, someone clearly had to draw up better terms of engagement.

For his part, Varela was determined not to repeat what had occurred at the Naropa meeting, so he set down some guidelines for any future meeting about Buddhism and science: participants must not only be knowledgeable, they must have something to contribute and be open to dialogue. It would be a few more years, but he would get the chance to organize the kind of meeting he envisioned, and the Dalai Lama would be the one to make the difference. In 1983, now back in Chile, Varela traveled to a conference on science and spirituality in Austria, where he ended up sitting next to the Dalai Lama, who peppered him with questions about the brain. They were kindred spirits – a meditator who had come to science and a scientist who had come to meditation. They vowed to talk again.

In 1985, Varela heard from his friend Joan Halifax of a plan hatched by businessman and Buddhist Adam Engle to hold a dialogue between the Dalai Lama and scientists about the shared ground between Buddhism and modern physics. Varela persuaded Engle that brain science would be a better place to start and they formed a partnership that led to the first Mind and Life meeting, “Dialogues between Buddhism and Cognitive Science,” held in Dharamsala, India, in October 1987.

Varela was the scientific coordinator for the meeting, and he developed a template that called for a small, committed group of participants, each of whom would make a presentation on a different aspect of a topic area. Discussion would be facilitated by the coordinator and the Dalai Lama would be an active participant throughout. This has been the format, with minor variation, for all twelve of the Mind and Life dialogues that have been held to date.

Richard Davidson, far right, demonstrates a PET scanner during the Dalai Lama’s tour of the Keck Laboratory at the University of Wisconsin at Madison. Photo by Jeff Miller.

Several years after the first Mind and Life meeting, Varela found himself tromping around the mountains and caves above Dharamsala. He was there in an effort sanctioned by the Dalai Lama to use sophisticated instruments to measure what was going on when yogis meditated. His partner in that effort, Richard Davidson, was – and is – a leading authority on the relationship between brain and emotion and a pioneer in developing and applying techniques for measuring brain activity. He holds several academic chairs in psychology and psychiatry and is the director of the Laboratory for Affective Neuroscience and the W.M. Keck Laboratory for Functional Brain Imaging and Behavior at the University of Wisconsin at Madison.

Richie Davidson, as he likes to be known, has long been interested in trying to demonstrate scientifically what meditation might do. In the early seventies, he was a Harvard colleague of Daniel Goleman, who would go on to become a champion of the principle of “emotional intelligence” and write a best-seller by that name. In their Harvard days, Davidson and Goleman co-authored a paper that argued that training attention through meditation would create “lasting and beneficial psychobiological changes.” While a layperson can rely on anecdotes and personal reports to determine whether or not there are “beneficial changes,” a scientist needs hard data.

Fortunately, as Davidson’s career progressed, so did the science on brain function. The Society of Neuroscience, only established in 1970, would go on to become the largest and fastest-growing society in all of experimental biology. By the late eighties, neuroscientists were taking very detailed pictures of brain activity, and by the late nineties they were taking videos. Because of such advances in brain-imaging technology, researchers could now gather hard data about the beneficial effects of meditation. Talking about such data was one of the primary focuses of the 2000 Mind and Life conference, coordinated by Goleman, with Davidson, Varela, Paul Ekman, another prominent emotion researcher, and others in attendance. The results of that meeting, and a follow-up session the next year at Davidson’s lab, are the subject of Goleman’s book, Destructive Emotions.

Researchers in Davidson’s lab have been able to chart brain activity in meditators in a way that has never been done before, primarily by using a functional MRI, which videotapes brain function (unlike the standard MRI, which only takes snapshots). They combine this information with data from an electroencephalograph (EEG), which measures electrical activity at the surface of the brain. While the EEG technician at your local hospital might attach several dozen sensors to a patient’s head, in Davidson’s lab they use up to 256. The raw EEG data is enhanced by software that triangulates from the sensors and reports on activity not only on the surface but deep within the brain. Davidson told me recently that his goal is to “establish through scientific research the validity of methods that have been developed in Buddhism for 2,500 years.” Through objective verification of their benefits, Davidson believes, “these practices could gain wider acceptance both in the mainstream culture and the medical community.”

Davidson’s team and his collaborators have done two types of studies, one with people first learning to meditate and another with extremely experienced and adept practitioners. In the first kind of study, they are trying to find out what benefits accrue for someone whose meditation is regular but of limited duration. Jon Kabat-Zinn has done extensive research into the health benefits of mindfulness meditation and has long been involved with Mind and Life, so Davidson collaborated with him on a recent study of workers in a high-tech company who took a two-month training program in meditation. It showed significant changes in brain activity, declines in anxiety, and beneficial changes in immune function.

The study of what Davidson calls “the Olympic athletes of meditation,” those who have done from 10,000 to 55,000 hours of practice, is intended to show “what the limits of human plasticity are.” When Davidson began his career, he couldn’t get much traction because the brain was treated as a computer by the reigning behaviorist view. The brain is now known to grow and change based on how it is used. So Davidson asks, “What does very intensive training do to the mind? We’ve come to appreciate the value of physical training, but we have not given the same kind of attention to the mind. In our work, we now view happiness and compassion as skills that can be trained. When we look at advanced practitioners, we are stretching how people think about the furthest reaches of human development.”

Among other findings, Davidson’s work has shown that meditators can regulate their cerebral activity, yielding more focus and composure. By contrast, most untrained subjects asked to focus on an object cannot limit their mental activity to a single task. The monks who had practiced the longest showed the greatest brain changes, leading Davidson to think that they may have effected permanent changes. His most intriguing results have come from observing advanced practitioners meditating on compassion. The brain changes observed during this practice seem to show that intensively generating goodwill produces indicators of an extreme state of well-being. While the sources of all kinds of disorders and dysfunctions have been studied extensively, there is almost no literature on what these scientists sometimes call “healing emotions.”

Paul Ekman, unlike Davidson and Kabat-Zinn, has had no long-term interest in meditators or meditation. Ekman, who recently retired as the head of the Human Interaction Laboratory at the University of California at San Francisco, studied the emotions for fifty years. He more or less stumbled into his recent involvement in studying meditators. “It all started with the meeting in Dharamsala,” he recently told me. “I only went to the meeting because my daughter had lived in a Tibetan refugee camp in Nepal and was very moved by the cause. I thought it would be a great treat for her to meet the Dalai Lama. Now, having met the Dalai Lama myself, I’ve developed an interest in what he’s doing, for what I can learn both as a person and as a scientist.

“When I completed my training 45 years ago, my supervisor said, ‘If you can increase the gap between impulse and action, you will benefit your patient.’ He didn’t know that’s a straight Buddhist view: the spark before the flame. That may be a place where through practices of one kind or another, it may be possible to do what nature did not intend for you to do, to become a spectator of yourself and decide whether you want to go along with it, and if so in what fashion.” Some think this convergence of neuroscientific thinking and Buddhist teachings is extraordinary. In the abhidharma (sometimes called “Buddhist psychology”) one is said to solidify experience through a chain of twelve mental events known as the nidanas. Some masters teach that the chain can be broken at the moment between “craving” and “attachment,” and unconditioned, open experience can occur in that gap.

Ekman says that “Increasing the gap between impulse and action is very unusual emotional behavior, but based on the studies I’ve done with a few monks, I believe that is something they can achieve. What these extraordinary people can do shows us the outer limit of what humans are capable of.”

As a result of the 2000 Mind and Life meeting and at the behest of the Dalai Lama, Ekman agreed to launch the Extraordinary Persons Project. His main subject in a precursor to this project was someone who has also been studied extensively in Davidson’s lab, the monk and fellow Mind and Life interlocutor Matthieu Ricard. A long-time meditator, Ricard served for twelve years as aide and translator for the great Dzogchen master Dilgo Khyentse Rinpoche.

Ekman’s specialty, developed over years of painstaking study of minute movements of the face, is the Facial Action Coding System, a method of cataloging emotions based on minute changes in facial muscles, such as raising the inner eyebrows, tightening the eyelids, or lowering the corners of the mouth. How well someone can detect such microexpressions is regarded as an indication of empathy, as well as a skill that enables one to uncover deception and ill-intent. Consequently, Ekman has been vigorously sought out to help law enforcement and anti-terror agencies.

Ekman was curious to see whether meditators, who might be expected to be more attentive and conscientious, would do well at detecting lightning-fast changes in facial expressions. When presented with a videotape showing a fleeting series of facial expressions that one must correlate with an emotion, Ricard and another meditator scored higher than any of the five thousand other people tested. As reported in Destructive Emotions, Ekman said, “They do better than policemen, lawyers, psychiatrists, customs officials, judges – even Secret Service Agents,” the group that had previously held top honors.

Ekman also decided to test whether Ricard could alter the startle reflex, the physiological response to a sudden loud noise. Following standard procedure, the researchers told the subject that they would count down from ten to one, at which point a loud noise would go off, the equivalent of a pistol fired near one’s ear. “I documented that Matthieu was able to focus his attention using a meditative practice so as to minimize any sign he had been startled,” Ekman says. He told the Dalai Lama, “I thought it was an enormous long shot that anyone could choose to prevent this very primitive, very fast reflex.”

What Ekman and Davidson have discovered in their research has nothing to do with holding a Buddhist worldview. For his part, Davidson says, “I am a hard-nosed Western neuroscientist. The level of description of mind and the level of description of brain are very different, but I also believe that mind depends on brain and without brain there is no mind.” While in Buddhism mind transcends embodiment, as evidenced by reincarnation, in neuroscience mind or consciousness is considered an “emergent property”: it just pops up where there are brains.

In Buddhism, emotions such as the “three poisons” – aggression, clinging, and delusion – are generally talked about as something to counteract or transcend. Ekman talks about emotions in Darwinian terms, as adaptations to the environment. They allow us to operate automatically, pre-thought. Ekman says, for example, that what he would call “fear” is required to be able to maintain the state necessary to react when driving at high speeds on a freeway. You could spend a long time talking about whether fear is good or not, but Ekman feels “it is not very helpful to just use words, because we may be using them in very different ways. We need to rely on examples. That’s what I try to do in the dialogues.”

Ekman and Davidson and the Buddhists they’ve been talking to seem not at all focused on who’s right and who’s wrong. The methodologies of science and Buddhism are mutually respected. For example, the fact that the notion of “mood” appears to have no formal place in Buddhist teachings and yet is a widely used notion by laypeople, clinicians, and researchers in the West is leading both Buddhist teachers and scientists to think about how they study and teach. In today’s Buddhism and science dialogue, insights are not so bound up with authorship. Who discussed the virtue of having “elasticity of mind”? The Buddha? No. Charles Darwin, in The Expression of Emotion in Man and Animals.

Immunofluorescent light micrograph of brain cells from the cortex of a mammalian brain. The star-shaped cells, astrocytes, provide support and nutrition to nerve cells, and may also play a role in information storage.

Matthieu Ricard started his professional life as a molecular biologist. Now, after many decades as a monk, his molecules have become the subject of study for biologists. His discussions with physicist Trinh Xuan Thuan, published as The Quantum and the Lotus, and his vigorous give-and-take with his father, the renowned philosopher Jean-François Revel, published as The Monk and the Philosopher (a best-seller in France) demonstrate his ability to discuss Buddhist understanding deftly outside the context of Buddhism. This has made him an ideal participant in Mind and Life dialogues and a laboratory subject who can report his subjective experience with scalpel-like precision.

Ricard is concerned that the average person is afraid of the mind, and that this fear is taking a great social toll. If you ask someone to look into their mind, he says, “A surprisingly common reaction is ‘I don’t want to look into my mind. I’m afraid of what I’m going to find there.'” He feels that many people may find the notion of meditation and working with the mind more attractive if they can see that “we vastly underestimate the magnitude of change that is possible. If studies can provide robust evidence for the effect of mind training, that will be of great value to society.”

When his father the philosopher challenges superstition in Buddhism, Ricard makes a strong case that contrary to popular belief, Buddhists do rely on verifiability. Buddhists are asked to examine what they have been taught and they commonly trust what they are told by a teacher by “evaluating all sides of their character.” He says that faith has a place in life, but not blind faith. The average person is constantly holding beliefs because “they accept the competence of those who provide the information.” He believes that “many people need to hear information about meditation from people they deem competent.” His book The Case for Happiness, now being translated into English, is part of that campaign. “I am willing,” he says, “to take a few trips a year to the States from Nepal to spend a few weeks on this research. It is time well spent, if I can serve as a bridge between worlds. The culture is training people’s minds in one direction right now. They need to see that another direction is possible.”

In his concluding statement in The Quantum and the Lotus, Ricard says that one of the main reasons that “science has been led into a dialogue with Buddhism” is the dilemma that has emerged through quantum mechanics and relativity of “trying to reconcile the apparent reality of the macrocosm with the disappearance of solid reality as soon as we enter the world of particles.” Arthur Zajonc (rhymes with science), editor of The New Physics and Cosmology: Dialogues with the Dalai Lama, is a physicist who has peered, at times side by side with the Dalai Lama, into the topsy-turvy world that lies far beneath the naked eye.

Zajonc notes that there is a kind of natural kinship between Buddhism and neuroscience, since Buddhism has had so much to say about the mind and can provide reliable evidence of effectiveness. “When you switch over to the physical sciences,” he says, “you are in a very different territory.” Buddhism could be said to offer a science of the mind, but there is nothing in Buddhism that looks much like the highly mathematical world of Western physics. If physics were limited to predicting what happens when a hammer hits a nail, there wouldn’t be much to talk about, but because physicists since Einstein have strayed into looking into the nature of reality, it engages the philosophical side of Buddhism and doctrines like emptiness of inherent nature and codependent origination. The philosophical convergences lead to seminars like “Quantum Nonlocality & Emptiness in Madhyamika Prasangika,” recently presented by physics professor Vic Mansfield as part of the Namgyal Buddhism and Science Dialogue.

Referring to a meeting in 1998 with the Dalai Lama at the Institute for Experimental Physics at the University of Innsbruck in Austria, Zajonc says, “We really worked on the nature of reality, why things look the way they look, when deep down they are actually quite different.” This question relates to two “problems” in modern physics. The first is the nonlocality problem: in the so-called “macro” world, we think of objects as discrete and unconnected, but at the quantum level, there really are no objects; everything is intimately connected with everything else. The second is the measurement problem: at the quantum level, the data that comes back to you is completely different depending on the question. It’s as if you were to ask a person, “Are you a boy?” and they say yes, but when you ask, “Are you a girl?” they also say yes. This kind of breakdown in logic has caused physicists to regard the quantum arena as random, despite Einstein’s retort that “God does not play dice with the universe.”

Buddhists and cosmologists can also get into a tangle on the beginning of the universe, since Buddhists are into beginninglessness. At the same time, notions of time seem to provide a point of convergence, since many Buddhist teachings upset conventional notions of time in the same way as the principle of relativity does.

Just like the mind scientists, Zajonc does not seem motivated by figuring out who’s right. In his book, he describes many points in dialogues when everyone breaks into peals of spontaneous laughter. “It’s amazing,” Zajonc says. “It literally breaks you up. It breaks up your ideas and leaves a kind of humor. Nonlocality, randomness, interdependence – these are like quantum koans. If you try to think them out in a conventional way, you will fail. Sometimes I think one needs a new level of insight to be able to put your mind around them. Furthermore, our technological advancement far outstrips our ethical development, our capacity to make sound judgments about what we’ve unleashed.”

On this point, Zajonc is passionate. “At the beginning of our scientific revolution,” he says, “there was division of labor. Science would take care of natural knowledge. All ethical considerations would be given to the church. The yogi knows that this is not actually possible. Knowledge brings power. As a scientist, you have the power, but you should also know the value of interconnectedness. When the genetic researcher Eric Lander was in dialogue with the Dalai Lama, he was struck when the Dalai Lama asked about the intention behind it all. Science has an ethic of leaving intention out of the picture, but with the nuclear problem and biotechnology, we find ourselves with moral dilemmas that our Enlightenment worldview is not fully able to handle.

“Our knowledge cannot be so object-oriented. In contemplative traditions like Buddhism, knowledge is insight-oriented. You don’t ingest units of knowledge; you transform how you see reality. If we educated people in a way that transformed their experience rather than just filled them with information, it would be an enormous help, but we tend not to. We have examples in the West, such as Goethe’s attempt to develop a contemplative science of insight, but the Buddhists have been doing it that way for a couple thousand years.”

Eleanor Rosch is skeptical about the Buddhism and science dialogue. She thinks it may be heading in an unhelpful direction. “For many it’s not a dialogue,” she says. “There’s a frenzy about this kind of thing. I get frequent e-mails from people who want to study meditation from the scientific point of view so they can ‘get rid of all that mystical Eastern stuff and find out what’s really going on’ – by which they mean neurons firing in the brain and similar functions. Then there are Buddhists who want to ‘prove’ that meditation ‘works.’ Often research shows more about the preconceptions of the researchers and audience than it does about the mind. For example, what metaphysical beliefs might you harbor that would make you wildly excited to learn that when people pay attention in meditation, they show the same pattern of brain activity as when they pay attention anywhere else? Rather than scientists and Buddhists stretching their minds together, I see Buddhism frequently colonized as a feel-good, flat-abs caricature of itself no different from any other materialist reductionist doctrine.

“I respect the Dalai Lama’s desire to establish a universal ethic of compassion by means of science,” she continues, “but given the present world dynamic, is allying Buddhism with the extremes of secular rationalism the way to do that? People ignore good science all the time. Buddhism might offer something unique to religious polarization: a middle way of spirituality beyond ego. It can stimulate religions to excavate the contemplative and meditative paths in their own heritages, such as the Jewish meditation movement and Christian centering prayer. What people really need is to find deeper contemplative experience before their competing thought systems lead them into a massive conflagration.”

Matthieu Ricard would respectfully disagree with Rosch about the value of the dialogue and the direction of the research. “I don’t see that what we are doing affects Buddhism negatively. We are not making Buddhism-lite. I am very disturbed when that happens. Buddhism remains Buddhism. We are simply offering food. To offer someone food that we know how to produce and that they need now, we don’t have to turn them into horticultural specialists.” As far as reductionism goes, Ricard contends that “No one doing sound science could gain any support for the reductionist viewpoint from what we’re doing. You can never answer the question of who decided to meditate on compassion in the first place. That is beyond the scope of scientific research.”

Questions about fortifying materialistic thinking and the possibility of co-opting Buddhism will undoubtedly remain. Some will question whether this dialogue helps in the development of a genuine contemplative tradition, as Arthur Zajonc seems to believe, or may lead us away from it, as Eleanor Rosch suggests. But the research will go on. Grant applications for research at prominent institutions like the National Institutes of Health, MIT, and Princeton that contain “mindfulness” or “meditation” are no longer scoffed at, and research centers focusing on meditation are likely to spring up.

Alan Wallace, a former Buddhist monk who studied with Arthur Zajonc, has been an interpreter and an active participant in every Mind and Life conference. He is also the author of the anthology Buddhism and Science and recently founded the Santa Barbara Institute for Consciousness Studies. The institute has picked up on work that Paul Ekman was doing at the behest of the Dalai Lama on “cultivating emotional balance,” and is training schoolteachers, nurses, and other health professionals in “secularized meditation” techniques and other forms of working with emotions. The Mindful Attention Program will study whether meditation can aid people with attention deficit hyperactivity disorder. Finally, the Shamatha Project will observe people meditating in a special facility over the course of a year.

Paul Ekman, who is on the board of the Santa Barbara Institute, is excited about this study, but he says, it’s the “second-best study.” The best study, he says, would be “to do something like what was done in the famous studies of cardiac disease, where they started with 4,000 people. If we started to look at 4,000 teenagers in the Bay Area, and studied them every few years, inevitably some of them would get involved with meditation. We would have known what they were like and who it was who got involved. Then, we would follow them for the next twenty-five years. That is the research that needs to be done.

“The only problem with the Santa Barbara Institute study is that people willing to spend a year in a meditative retreat are not Buddhist virgins. We’ll see what changes over time, and we’ll see what their nervous systems and their emotional lives are like at the start and how they change. We’ll learn a lot from it. But I would have liked to have seen them twenty years earlier, to find out what they were like before they got involved, and what it was that got them involved. Someday, someone will do that. It takes dedicating a lifetime to it. My mentor did a forty-year study of hypertension. It takes a career to do it. People who have been influenced by Buddhism, I would think, would be more willing than others to dedicate the time, since they are less preoccupied with their own cravings for glory and recognition.”

On March 24, 2000, Francisco Varela took the floor in the Dalai Lama’s meeting hall to give the last of his many presentations in the dialogue between Buddhists and scientists that he had done so much to get started. On the verge of tears, in his gestures and soft words he implicitly thanked the Dalai Lama for making it possible for him to be there. Several years earlier, when he was dying of cancer, he had been ambivalent about receiving a liver transplant. Suddenly he received a fax from the Dalai Lama encouraging him to prolong his life. Now, although frail, he was back in action, flashing a PowerPoint slide onto the screen. He made the case for Buddhists and neuroscientists to collaborate for the good of the human race, a case he had been making for more than twenty years, since a time when there were few people actually called neuroscientists, a time when people were laughing him out of the building. By the time the proceedings were published, he would be dead, but the movement he helped to start flourishes.



About Barry Boyce

Barry Boyce is a professional writer and editor who was longtime senior editor at the Shambhala Sun. He is editor of, and a contributor in, The Mindfulness Revolution: Leading Psychologists, Scientists, Artists, and Meditation Teachers on the Power of Mindfulness in Daily Life (2011). He is also the co-author of The Rules of Victory: How to Transform Chaos and Conflict—Strategies from the Art of War (2008).
