Wednesday, December 31, 2008
Lazar defines language as a system composed of four major characteristics: semantics, syntax, phonetics, and pragmatics. Semantics relates to the meaning of words and sentences; syntax refers to the structural aspects of language such as grammar; phonetics describes the sounds of language; and pragmatics involves the communicative value of language. He then goes on to show us what parts of the brain control these functions.
The Undesigned Universe
Part 2: “Designing a Habitable Solar System.” This lecture will discuss the notion of a “habitable zone” around any sun, what a star system optimally designed for life would look like, and finally how our solar system measures up.
Tuesday, December 30, 2008
Alexander Nehamas, Edmund N. Carpenter II Class of 1943 Professor in the Humanities, Professor of Philosophy and Comparative Literature Princeton University
“I am an aesthete; that is the one ‘sin’ I confess to. If I do have a public message, it is that aesthetic facts—beauty, style and elegance, grace and connectedness—are crucial to life.”
— Alexander Nehamas in an interview with David Carrier in Bomb Magazine, 1998
Alexander Nehamas is an internationally known philosopher whose broad range of scholarly interests includes classical Greek philosophy, aesthetics, and literary theory. Recently he has addressed the question of why beauty has been discredited as a philosophical notion and has championed aesthetic values. He is author of Virtues of Authenticity: Essays on Plato and Socrates (1999) and Nietzsche: Life as Literature (1985), which is considered a classic, as well as translator of Plato's Symposium (1989) and Phaedrus (1995). Nehamas is particularly interested in Nietzsche's integration of life and philosophy in the creation of self, which he calls the "art of living." He links this philosophical practice to a model that comes from classical Greece, and in his book The Art of Living: Socratic Reflections from Plato to Foucault (1998) he examines the influence of this Socratic tradition on later philosophers, including Montaigne, Nietzsche, and Foucault.
UW Simpson Center : Solomon Katz Lectures
Monday, December 29, 2008
Ralph Nader is a consumer advocate, lawyer, and author.
He was born in Winsted, Connecticut on February 27, 1934. In 1955 Ralph Nader received an AB magna cum laude from Princeton University, and in 1958 he received an LLB with distinction from Harvard University.
His career began as a lawyer in Hartford, Connecticut in 1959 and from 1961-63 he lectured on history and government at the University of Hartford.
In 1965-66 he held a Nieman Fellowship, and in 1967 he was named one of ten Outstanding Young Men of the Year by the U.S. Junior Chamber of Commerce. In 1967-68 he returned to Princeton as a lecturer, and he continues to speak at colleges and universities across the United States.
In his career as a consumer advocate he founded many organizations including the Center for Study of Responsive Law, the Public Interest Research Group (PIRG), the Center for Auto Safety, Public Citizen, the Clean Water Action Project, the Disability Rights Center, the Pension Rights Center, the Project for Corporate Responsibility, and The Multinational Monitor (a monthly magazine).
The Nader Page
Lectures by Ralph Nader . listeningtowords
Princeton University: WebMedia - Lectures
Sunday, December 28, 2008
Stephen Wolfram is a scientist, author, and business leader. He is the creator of Mathematica, the author of A New Kind of Science, and the founder and CEO of Wolfram Research. His career has been characterized by a sequence of original and significant achievements.
Born in London in 1959, Wolfram was educated at Eton, Oxford, and Caltech. He published his first scientific paper at the age of 15, and had received his Ph.D. in theoretical physics from Caltech by the age of 20. Wolfram's early scientific work was mainly in high-energy physics, quantum field theory, and cosmology, and included several now-classic results. Having started to use computers in 1973, Wolfram rapidly became a leader in the emerging field of scientific computing, and in 1979 he began the construction of SMP--the first modern computer algebra system--which he released commercially in 1981.
In recognition of his early work in physics and computing, Wolfram became in 1981 the youngest recipient of a MacArthur Prize Fellowship. Late in 1981 Wolfram then set out on an ambitious new direction in science aimed at understanding the origins of complexity in nature. Wolfram's first key idea was to use computer experiments to study the behavior of simple computer programs known as cellular automata. And starting in 1982 this allowed him to make a series of startling discoveries about the origins of complexity. The papers Wolfram published quickly had a major impact, and laid the groundwork for the emerging field that Wolfram called "complex systems research."
Through the mid-1980s, Wolfram continued his work on complexity, discovering a number of fundamental connections between computation and nature, and inventing such concepts as computational irreducibility. Wolfram's work led to a wide range of applications--and provided the main scientific foundations for such initiatives as complexity theory and artificial life. Wolfram himself used his ideas to develop a new randomness generation system and a new approach to computational fluid dynamics--both of which are now in widespread use.
Stephen Wolfram: A New Kind of Science
Paul Ekman, Ph.D., is Professor of Psychology at the University of California at San Francisco. Ekman is a world-renowned expert in emotion research and nonverbal communication, particularly for his studies on emotional expression and the corresponding physiological activity of the face. His research has been supported by the National Institute of Mental Health for 46 years. He has also received support from the National Science Foundation, and he has organized an NSF workshop, "Understanding the Face." Among his distinguished lectures is a 1992 keynote address to the Japanese Congress of Psychology. Ekman edited the new edition of Charles Darwin's The Expression of the Emotions in Man and Animals (Oxford 1998), to which he also contributed an important introduction and afterword. His other books include The Nature of Emotion: Fundamental Questions (with R. Davidson, Oxford 1994) and What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (with E. L. Rosenberg, Oxford 1998). His latest book is Emotions Revealed (Times Books, April 2003).
Becoming Human: Brain, Mind and Emergence (Stanford University Event 2003)
Conversations host Harry Kreisler welcomes Harvard Law Professor Elizabeth Warren for a discussion of the economic pressures confronting the two income middle class family as it struggles to pay mortgages, health care, and education costs. Professor Warren offers surprising answers to "Who goes bankrupt and why?" and explores the role of banks and credit card companies in tightening the squeeze on the average American family. The interface between politics and the law in addressing these problems is explored.
UCTV Programs with Elizabeth Warren
Conversation with Elizabeth Warren, cover page
Reflections and Further Comments: Play
Antonio R. Damasio, M.D., Ph.D., is M.W. Van Allen Professor and Head of Neurology, University of Iowa. He completed his medical degree and doctorate at the University of Lisbon in Portugal and was a research fellow at the Aphasia Research Center in Boston. He has won many honors and awards, including the Arnold Pfeffer Prize (2002) and the Reenpaa Prize in Neuroscience (2000), and was elected to the American Academy of Arts and Sciences (1997) and the Neurosciences Research Program (1997). His research interests include the neurobiology of the mind, specifically the understanding of the neural systems that subserve memory, language, emotion, and decision-making; his clinical interests focus on disorders of behavior and cognition, and movement disorders.
Becoming Human: Brain, Mind and Emergence (Stanford University 2003 Event)
Selected Bibliography
Descartes' Error: Emotion, Reason, and the Human Brain; Looking for Spinoza: Joy, Sorrow, and the Feeling Brain; The Feeling of What Happens: Body and Emotion in the Making of Consciousness
Antonio Damasio USC Neuroscience
USC College of Letters, Arts, & Sciences : Brain and Creativity
Saturday, December 27, 2008
Author: Linda C. Mayes, M.D., Yale University
Adolescents, particularly those from high risk environments, are especially likely to engage in risky behaviors including drug use and abuse. Emotional regulation, stress responsiveness, reward sensitivity, impulse regulation, and decision-making are hypothesized to be involved in adolescent engagement in risky behaviors. Each of these capacities reflects the emerging maturation of subcortical to cortical neural circuitry involved in stress-reward systems and in the development of capacities for behavioral inhibition. During adolescence, the prefrontal cortex is relatively immature and undergoes refinement of neuronal connections. At the same time, dopaminergically regulated subcortical brain systems responsive to stress are more active during adolescence than in adulthood. Hence, in adolescence, increased dopamine input at times of increased stress may disproportionately impair decision making and judgment in the still-immature prefrontal circuits. This emerging balance between "emotional regulation" and "inhibition-rational decision" systems is delayed by factors such as early childhood stress or prenatal drug exposure. Thus, prenatally drug-exposed adolescents growing up in chronic adversity may be especially vulnerable to early addiction because of poor emotional regulatory mechanisms, which make them more sensitive to stress and have an attendant negative impact on inhibition, impulse control, and decision making. Understanding mechanisms for initiating drug use during a critical developmental period will allow more effective and targeted interventions for adolescents at risk for addiction.
This lecture is an installment of the Behavioral and Social Sciences Research Lecture Series sponsored by the NIH Office of Behavioral and Social Sciences Research and organized by the NIH Behavioral and Social Sciences Research Coordinating Committee. The Behavioral and Social Sciences Research Coordinating Committee (BSSR CC), with support from the Office of Behavioral and Social Sciences Research (OBSSR), convenes a series of guest lectures and symposia on selected topics in the behavioral and social sciences. These presentations by prominent behavioral and social scientists provide the NIH community with overviews of current research on topics of scientific and social interest. The lectures and symposia are approximately 50 minutes in length, with additional time for questions and discussion. All seminars are open to NIH staff and to the general public.
NIH Video Casting
Friday, December 26, 2008
In conclusion, Lazar offers a note of hope regarding the long-term prognosis of aphasia. There is incredible plasticity in the brain demonstrated by the ability of aphasics to regenerate language function even after devastating lesions. Although most improvement comes in the first months after treatment, patients do continue to improve over time, he insists, well beyond the three- to six-month window conventional wisdom allows. Speech language therapy can have a tremendous benefit for the functioning of aphasic patients. Language function can even relocate to another hemisphere following injury to the brain. As Lazar concludes, "there is hope for us all."
Robert Pollin is Professor of Economics at the University of Massachusetts in Amherst. He is the founding co-director of the Political Economy Research Institute (PERI). His research centers on macroeconomics, conditions for low-wage workers in the U.S. and globally, the analysis of financial markets, and the economics of building a clean-energy economy in the U.S. His books include A Measure of Fairness: The Economics of Living Wages and Minimum Wages in the US and Contours of Descent: US Economic Fractures and the Landscape of Global Austerity.
Thursday, December 25, 2008
Leading neuroscientists gathered at this C250 symposium to discuss the accomplishments and limitations of reductionist and holistic approaches to examining the nervous system and mental functions.
Historically, neural scientists have taken one of two somewhat parallel approaches to the complex problem of understanding the biological mechanisms that account for mental activity. The first, or molecular model, analyzes the nervous system in terms of its elementary components, by examining one molecule, cell, or circuit at a time. The second, or cognitive model, focuses on mental functions in human beings and animals in an attempt to relate behavior to higher-order features of large systems of neurons.
The symposium "Brain and Mind," at Miller Theatre May 13 and 14, helped outline the accomplishments and limitations of these two approaches in attempts to delineate the problems that still confront neural science. Organized by Tom Jessell, professor of biochemistry and molecular biophysics, and Joanna Rubinstein, senior associate dean for institutional and global initiatives at Columbia University Medical Center, the symposium featured a number of distinguished faculty members, including Eric Kandel, Columbia's Nobel Prize–winning neurophysiologist, as well as visiting scholars from the National Institutes of Health, Rockefeller University, King's College London, Caltech, MIT, and elsewhere.
The course of the program, according to Rubinstein, was to "turn from reductionist to holistic approaches," looking first at what is known about cells and neural networks before addressing research into perceptions and behaviors. Participating scholars discussed current understandings and answers to key questions: How do the actions of individual neurons shape the function of neural populations? What is the underlying logic of signaling in complex neural circuits? How do dynamic mechanisms modify the processing of this information? And ultimately, how does the activity of neural ensembles generate cognitive and emotional behavior?
They also confronted some of the enduring mysteries regarding the biology of mental functioning: How does signaling activity in different regions of the visual system permit us to perceive discrete objects in the visual world? How do we recognize a face? How do we become aware of that perception? How do we reconstruct that face at will, in our imagination, at a later time and in the absence of ongoing visual input? What are the biological underpinnings of our acts of will?
Brain and Mind Video Archive
By John R. Searle
John Searle was born in Denver, Colorado, and educated at the universities of Wisconsin and Oxford, where he was a Rhodes Scholar. He holds all of his degrees, BA, MA, and DPhil, from Oxford, and he had his first teaching appointment there as a lecturer at Christ Church. Since 1959, he has been a professor at the University of California, Berkeley, where he holds the chair of Mills Professor of the Philosophy of Mind and Language. He has been a visiting professor at many universities both in the United States and internationally, including the universities of Oxford, Paris, Berlin, Frankfurt, Venice, Rome, Florence, Prague, Graz, Aarhus, Oslo, Campinas, and Lugano. He is the author of 15 books and over 150 articles. His work has been translated into 21 languages.
Professor Searle holds honorary degrees from universities in four different countries. Among his best known books are Expression and Meaning: Studies in the Theory of Speech Acts, Minds, Brains and Science, Intentionality: An Essay in the Philosophy of Mind, The Rediscovery of the Mind, The Mystery of Consciousness, The Construction of Social Reality, Rationality in Action, and Consciousness and Language.
At the beginning of the investigation of consciousness, we need to remind ourselves of what we already know.
1. Consciousness (subjective, qualitative, intentionalistic, and unified) really exists as a part of the biological world. It cannot be eliminated or reduced to something else.
2. All conscious states are caused by lower-level neuronal processes in the brain.
3. Consciousness is realized in the brain as a higher-level or system feature.
4. Consciousness functions causally in producing the behavior of conscious organisms.
Confusions about these four are common and derive from a set of mistaken assumptions. The main sources of the mistakes are the traditional dualistic vocabulary of mental and physical, the ambiguous concept of reduction, the traditional (from Hume) conception of causation and the concept of identity. Once we get over the obstacles created by these confusions, we will not have solved the problem of consciousness, but we will at least have removed some of the major obstacles to its solution.
Brain and Mind Video Archive
Wednesday, December 24, 2008
Tuesday, December 23, 2008
Fred H. Gage is Professor and Vi and John Adler Chair for Research on Age-Related Neurodegenerative Diseases, Laboratory of Genetics.
Fred H. Gage, a professor in the Laboratory of Genetics, concentrates on the adult central nervous system and the unexpected plasticity and adaptability to environmental stimulation that remain throughout the life of all mammals. His work may lead to methods of replacing or enhancing brain and spinal cord tissues lost or damaged due to neurodegenerative disease or trauma.
Gage's lab showed that, contrary to accepted dogma, human beings are capable of growing new nerve cells throughout life, a process called neurogenesis; small populations of immature nerve cells are found in the adult mammalian brain. Gage is working to understand how these cells can be induced to become mature, functioning nerve cells in the adult brain and spinal cord. His team showed that environmental enrichment and physical exercise can enhance the growth of new brain cells, and they are studying the underlying cellular and molecular mechanisms that may be harnessed to repair the aged and damaged brain and spinal cord.
About the Series: Grey Matters: Molecules to Mind
During the past decade, there have been dramatic advancements in the brain and cognitive sciences. For the first time, understanding how the brain works has become a scientifically achievable goal.
In this new lecture series, Grey Matters: Molecules to Mind, San Diego's leading neuroscientists explore the human brain. The first lecture in this series addresses an issue that has often been absent in these discussions: what role do stem cells play in the development of the brain?
Monday, December 22, 2008
talk for the Freethought Association of Western Michigan
Sunday, December 21, 2008
Saturday, December 20, 2008
Friday, December 19, 2008
Tuesday, March 04, 2008
Dana Center, Washington, DC
At a news conference on March 4, the Dana Foundation released Learning, Arts, and the Brain, a three-year study at seven universities which finds strong links between arts education and cognitive development. Speakers included Michael Gazzaniga, Ph.D., UC Santa Barbara; Michael Posner, Ph.D., University of Oregon; Elizabeth Spelke, Ph.D., Harvard University; and Brian Wandell, Ph.D., Stanford University. Guy McKhann, M.D., Johns Hopkins University, gave a summary, and Dana Gioia, chairman of the National Endowment for the Arts, spoke of the study's importance to the field of education.
Event Transcript (PDF)
Thursday, December 18, 2008
Wednesday, December 17, 2008
Naomi Oreskes is Provost of Sixth College, Professor of History and Science Studies, and Adjunct Professor of Geosciences at UC San Diego, and one of the nation's leading experts on the history of the earth and environmental sciences. Her work came to public attention in 2004 with the publication of "The Scientific Consensus on Climate Change" in Science and was featured in former Vice President Al Gore's film, An Inconvenient Truth. Her forthcoming book is FIGHTING FACTS: How a Handful of Scientists Have Muddied the Waters on Environmental Issues From Tobacco to Global Warming.
Monday, December 15, 2008
Sunday, December 14, 2008
Chris Mooney is senior correspondent for The American Prospect magazine and author of two books: the New York Times bestselling The Republican War on Science—dubbed "a landmark in contemporary political reporting" by Salon.com and a "well-researched, closely argued and amply referenced indictment of the right wing's assault on science and scientists" by Scientific American—and Storm World: Hurricanes, Politics, and the Battle Over Global Warming—dubbed "riveting" by the Boston Globe and selected as a 2007 best book of the year in the science category by Publishers Weekly. He also writes "The Intersection" blog with Sheril Kirshenbaum.
Saturday, December 13, 2008
By Jeffrey McNeely, chief conservation scientist for the International Union for Conservation of Nature.
McNeely is considered a leading expert on conservation and is the co-author of "Ecoagriculture: Strategies to Feed the World and Save Wild Biodiversity" (Island Press, 2002). He has consulted for governments and conservation organizations on conservation policy and practice.
Ecoagriculture Working Group at Cornell
Department of Natural Resources
Dr. Niles Eldredge, curator of fossil invertebrates at the American Museum of Natural History, delivered the keynote address at the Paleontological Research Institution's second annual Summer Symposium, July 25, 2008, at Ithaca's Museum of the Earth.
By Dr. Carl Wieman
About the Video
In his talk, "Science education in the 21st century: Using the tools of science to teach science", Nobel Laureate Dr. Carl Wieman emphasizes the importance of making science education effective and relevant for a large and diverse population. The approach, he says, is to transform how students understand and use science, and this calls for teaching them to actually think like scientists.
Sponsored by Cornell's Center for Teaching Excellence.
University of California San Diego
La Jolla, CA
Feb 8th, 2008
By Gerd Gigerenzer. Gigerenzer is Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin and former Professor of Psychology at the University of Chicago. He won the AAAS Prize for the best article in the behavioral sciences. He is the author of Calculated Risks: How To Know When Numbers Deceive You, the German translation of which won the Scientific Book of the Year Prize in 2002. He has also published two academic books on heuristics, Simple Heuristics That Make Us Smart (with Peter Todd & The ABC Research Group) and Bounded Rationality: The Adaptive Toolbox (with Reinhard Selten, a Nobel laureate in economics). Source: Edge
About the Lecture
According to the speaker, human beings tend to think of intelligence as a deliberate, conscious activity guided by the laws of logic. Yet, he argues, much of our mental life is unconscious, based on processes alien to logic: gut feelings, or intuitions. Dr. Gigerenzer argues that intuition is more than impulse and caprice; it has its own rationale, which can be described by fast and frugal heuristics that exploit evolved abilities in the human brain. Heuristics ignore information and focus on the few important reasons. Says Gigerenzer: "More information, more time, even more thinking, are not always better, and less can be more." His talk is part of an ongoing series of "Behavioral, Social and Computational Sciences Seminars" organized by the UC San Diego division of the California Institute for Telecommunications and Information Technology (Calit2), which aims to bring the benefits of computational science to disciplines that have so far been largely bypassed by the information-technology revolution.
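One concrete example of a fast and frugal heuristic from Gigerenzer's research programme is "take-the-best", which compares two options cue by cue, in order of cue validity, and decides on the first cue that discriminates. Here is a minimal sketch; the city names and cue values are invented for illustration, not drawn from the lecture:

```python
# "Take-the-best": compare two options on cues ordered by validity and
# decide on the first cue whose values differ; all later cues are ignored.
# Cue names, cities, and values are invented for illustration only.
CUES = ["has_intl_airport", "has_university", "has_pro_team"]  # most valid first

CITY_CUES = {
    "Ashford":  {"has_intl_airport": 1, "has_university": 0, "has_pro_team": 1},
    "Brockton": {"has_intl_airport": 0, "has_university": 1, "has_pro_team": 1},
}

def take_the_best(a, b):
    """Predict which of two cities is larger, or None if no cue discriminates."""
    for cue in CUES:
        va, vb = CITY_CUES[a][cue], CITY_CUES[b][cue]
        if va != vb:                       # first discriminating cue decides
            return a if va > vb else b
    return None                            # the full model would guess here

print(take_the_best("Ashford", "Brockton"))  # Ashford (airport cue decides)
```

The point of the sketch is exactly what the lecture stresses: once one good reason discriminates, all remaining information is deliberately ignored.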
This lecture will try to give you a feel of Einstein's "Theory of General Relativity" and show how, over the last 90 years, it has stood up to all the observational tests that have been made.
Einstein's Special Theory of Relativity postulates that nothing can travel through space faster than the speed of light, 3 × 10^5 km/s. (The word 'through' has been highlighted as the expansion of space can carry matter apart faster than the speed of light.) Perhaps a rather far-fetched thought experiment will help to make it clear that, if this is the case, Newton's Law of Gravity cannot be totally correct. Suppose that the Sun could suddenly cease to exist. Under Newton's theory of gravity, the Earth would instantly fly off at a tangent. Einstein realised that this could not be the case. Not only would we not be aware of the demise of the Sun for 8.31 minutes - the time light takes to travel from the Sun to the Earth - the Earth must continue to feel the gravitational effects of the Sun for just the same time, and would only fly off at a tangent at the moment we ceased to see the Sun. This assumes, of course, that whatever carries the information about the gravitational field of the Sun will also propagate at the speed of light. So something has to propagate through space to carry the information about a change in gravity field. Einstein thus postulated the existence of gravitational waves that would carry such information. As we will see later, the existence of such gravitational waves has already been shown indirectly and it is likely that, before long, direct evidence will be gained.
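As a quick check on the 8.31-minute figure, dividing the mean Sun-Earth distance by the speed of light gives a little over eight minutes (the two constants below are standard textbook values, assumed rather than taken from the lecture):

```python
# Light travel time from the Sun to the Earth.
# Assumed standard values: 1 AU = 1.496e8 km, c = 2.998e5 km/s.
AU_KM = 1.496e8      # mean Sun-Earth distance, km
C_KM_S = 2.998e5     # speed of light, km/s

travel_time_min = AU_KM / C_KM_S / 60
print(f"{travel_time_min:.2f} minutes")  # 8.32 minutes
```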
In 1915, Einstein published his 'General Theory of Relativity', often called 'General Relativity', which is essentially a theory of gravity. Objects in our Universe exist in a four-dimensional space-time continuum which combines the three co-ordinates of space with a fourth co-ordinate, time. For simplicity, in what follows, the term "space" will be used. In the absence of mass, Einstein's theory predicts that space is 'flat'. This is a rather unfortunate term as it seems to imply a two-dimensional plane surface. In fact, it simply means that light will travel in straight lines, so two initially parallel beams of light remain parallel. In "flat" space a triangle in any orientation will have inscribed angles that add up to 180 degrees. Euclidean geometry holds true! (Personally, I would like to start a movement to stop calling space "flat" and use the terms "Euclidean" or "zero curvature" space instead!)
If a mass is now introduced into flat space it makes the space positively curved so that two initially parallel beams of light will converge and the inscribed angles of triangles will add up to more than 180 degrees. A simple, two dimensional, analogy is a flat stretched sheet of rubber. Ball bearings rolled across it will travel in straight lines. If a heavy ball is now placed on the rubber sheet, it will cause a depression and should a ball bearing come close, it will follow a curved path. In just the same way, the space around our Sun is positively curved and the Earth is simply following its natural path through this curved space - there is no force acting on it.
Imagine a spherical world where the inhabitants believe totally (but incorrectly) that the surface is flat. In the region near their north pole the icy surface is virtually frictionless. The inhabitants use a hovercraft-like transport so that, once moving, the craft experience no frictional forces. Two craft, 10 km apart and at identical distances from the pole, set off at the same time and at the same speed heading due north on parallel paths across the ice. As they believe that the surface is flat, they will expect to remain this distance apart as they travel across the ice. They will thus be somewhat surprised (and possibly hurt) when they collide at the North Pole! In order to maintain their belief that the surface of their world is flat they could postulate a force, that they might call 'gravity', to explain why their craft were drawn together. In the same way, we postulate the force of gravity to explain what we observe in the (incorrect) belief that three dimensional space is flat in the vicinity of mass, not curved.
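The analogy can even be made quantitative. On a sphere, two "parallel" northbound tracks are meridians, and their separation shrinks in proportion to the cosine of the latitude, vanishing at the pole. A sketch assuming an Earth-sized radius and the 10 km starting separation from the story:

```python
import math

# Separation of two northbound meridian tracks on a sphere of radius R:
# at latitude phi the gap is R * cos(phi) * dlon, shrinking to zero at the pole.
# Assumed values: an Earth-sized planet, craft starting 10 km apart at 80 N.
R = 6371.0                                             # planetary radius, km
dlon = 10.0 / (R * math.cos(math.radians(80.0)))       # longitude gap, radians

for lat in (80.0, 85.0, 89.0, 89.9):
    sep = R * math.cos(math.radians(lat)) * dlon
    print(f"latitude {lat:4.1f} N: separation {sep:5.2f} km")
```

Nothing is pulling the craft together; the apparent "force" is just the geometry of the curved surface.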
Gravity is a force that we "invent" to explain what we observe happening (such as the planets going around the Sun) in the belief that space is flat when it is, in fact, positively curved.
The first "test" of Einstein's theory was its application to the orbit of Mercury. As Kepler's first law of planetary motion tells us, the orbit of Mercury should be an ellipse with the Sun at one focus. The point of closest approach, called its perihelion, would remain fixed in space if the Sun were a perfect sphere and there were no other planets. However the oblateness of the Sun and perturbations caused by the other planets cause the orbit to "precess" - think of the patterns produced by a Spirograph. Accurate observations showed that the observed value of this precession, 5599.7 arc seconds per century, disagreed with that calculated from Newton's theory by 43.0 arc seconds per century. The application of Einstein's theory provided a correction term of 42.98 +/- 0.04 arc seconds per century - exactly that required to remove the anomaly!
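The 42.98 arc-second correction can be reproduced from the standard general-relativistic formula for the perihelion advance per orbit, dphi = 6*pi*G*M / (c^2 * a * (1 - e^2)). A sketch using commonly quoted values for the Sun and Mercury's orbit (the constants below are assumptions, not figures from the lecture):

```python
import math

# GR perihelion advance per orbit: dphi = 6*pi*G*M / (c^2 * a * (1 - e^2)).
# Assumed values (SI) for the Sun and Mercury:
GM_SUN = 1.327e20      # G * M_sun, m^3 s^-2
C = 2.998e8            # speed of light, m/s
A = 5.791e10           # semi-major axis of Mercury's orbit, m
E = 0.2056             # eccentricity of Mercury's orbit
PERIOD_DAYS = 87.97    # Mercury's orbital period

dphi = 6 * math.pi * GM_SUN / (C**2 * A * (1 - E**2))   # radians per orbit
orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_century = math.degrees(dphi) * 3600 * orbits_per_century
print(f"{arcsec_per_century:.2f} arcsec per century")   # about 42.97
```

The tiny per-orbit advance (about half a microradian) only becomes measurable because Mercury completes over 400 orbits per century.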
It was realised that Einstein's theory could be tested by observing the positions of stars when observed close to the Sun. Einstein's theory predicted that the positions of stars nearest the Sun's limb would be shifted by just 1.75 arc seconds - close to the limitations in measurement accuracy due to the atmosphere. (It should be noted that Newton also predicted, for different reasons, that light waves should be bent when passing the Sun, but the effect due to the distortion of space-time in Einstein's theory is twice that predicted by Newton's theory.)
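The 1.75 arc-second prediction follows from the deflection formula for a ray grazing the solar limb, theta = 4*G*M / (c^2 * R), with the Newtonian corpuscular argument giving exactly half of this. A sketch with assumed standard solar values:

```python
import math

# Deflection of a light ray grazing the Sun's limb.
# GR prediction: theta = 4*G*M / (c^2 * R); the Newtonian value is half this.
# Assumed standard solar values (SI):
GM_SUN = 1.327e20    # G * M_sun, m^3 s^-2
C = 2.998e8          # speed of light, m/s
R_SUN = 6.96e8       # solar radius, m

theta_rad = 4 * GM_SUN / (C**2 * R_SUN)
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"GR: {theta_arcsec:.2f} arcsec; Newtonian: {theta_arcsec / 2:.2f} arcsec")
```

The factor-of-two difference between the two predictions is what made the eclipse measurements a genuine test rather than a mere confirmation of bending.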
We cannot usually measure the positions of stars close to the Sun except during a total eclipse of the Sun and thus the eclipses of 1919 and 1922 which followed the publication of Einstein's theory played a significant role in the history of science. In essence the plan was simple. Prior to a solar eclipse, take images of the sky where the Sun would be during totality. Take the same images during totality - the only time when stars can be seen close to the Sun's position - and compare the positions of the stars.
Sir Arthur Eddington led the British eclipse expedition to the Atlantic island of Principe, whilst a second set of observations was made from Sobral in Brazil. Fortuitously, at the time of the eclipse, the Sun lay in front of the Hyades star cluster, giving a rich field of stars from which to make measurements - comparing individual star positions made during the eclipse with earlier observations of the cluster. The telescopes used thus had to be portable and this limited their accuracy. The control images obviously had to be taken at night, when it would have been colder than during the daytime. Even disregarding these problems, the experiment was not easy. The anticipated deflection of 1.6 arc seconds has to be compared with the typical size of a stellar image as observed from the ground (due to atmospheric turbulence) of 1 to 2 arc seconds.
The data from the observations were not quite as conclusive as was implied at the time. The telescope at Principe was used to take 16 plates, but partial cloud reduced their quality. Two usable plates from Principe, though of poor quality, suggested a mean deflection of 1.62". Two telescopes were used at Sobral, where conditions were superb; sadly, however, the focus of the main instrument shifted, probably due to temperature changes, and the stellar images were not clear. They were thus difficult to measure and produced a result of ~ 0.93 arc seconds. A smaller 10 cm instrument did, however, produce 8 clear photographic plates and these showed a mean deviation of 1.98 +/- 0.12 arc seconds. If all the data had been included, the results would have been inconclusive, but Eddington, with little justification, discounted the results obtained from the larger Sobral telescope and gave extra weight to the results from Principe (which he had personally recorded). On November 6th that year the Astronomer Royal and the President of the Royal Society declared the evidence was decisively in favour of Einstein's theory. However there were many scientists who, at the time, felt there were good reasons to doubt whether the observations had been able to accurately test the theory.
A more positive test of the theory came from observations made by William Campbell's team from the Lick Observatory who observed the 1922 eclipse from Australia. They determined a stellar displacement of 1.72 +/- 0.11 arc seconds. Campbell had believed that Einstein's theories were wrong, but when his experiment proved exactly the opposite, he immediately admitted his error and thereafter supported relativity. (One tends to believe an experiment when the results do not agree with the expectations of the observer!)
If the Sun's mass can produce a small shift in the position of a distant object, so too should the mass of a galaxy. Occasionally, a galaxy will lie close to the line of sight to a more distant object. The mass of the galaxy distorts the space around it, forming a "gravitational lens". Depending on the relative positions, this lens can form multiple images of the distant object or even spread its light or radio emission into an arc or ring - called an Einstein ring. In 1977, observations with the Lovell Telescope at Jodrell Bank discovered two quasars whose positions were close (~6 arc seconds) to that of a foreground galaxy. Quasars are very distant, bright radio sources that appear star-like on photographic plates - hence their full name, "quasi-stellar object". Now called the "Double Quasar", it was soon realised that we were observing two images of the same object. But there is a subtle difference: the path length through space between us and the quasar is longer for one of the images by a distance of 417 light days. We thus see it, simultaneously, at two times in its existence - separated by 417 days! Time and space do interact, showing why space-time is implicit in Einstein's theory.
[One might wonder how the time difference has been measured. Quasars are giant galaxies which have, at their heart, a "supermassive black hole". Stellar material falling in towards the black hole provides the energy source of the quasar, and as the rate at which material is consumed varies, so does the energy output: the brightness varies with time. Suppose the image whose light has travelled the shorter path is seen to increase rapidly in brightness by 10%. The image whose light travels the longer path would then be seen to increase by the same amount some time later, determined by the difference in path length. By comparing the brightness curves of the two images, a "match" was found when the time difference was 417 days.]
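The matching idea can be illustrated with a toy sketch (not the published analysis; the brightness record below is an invented random walk): slide one light curve against the other and pick the shift at which they line up best.

```python
# Toy light-curve matching: recover a known time delay between two images
# of the same varying source. The day sampling and light curve are invented.
import numpy as np

rng = np.random.default_rng(0)
n, true_lag = 2000, 417                     # daily samples; a 417-day delay

# One intrinsic brightness history, seen twice with different path lengths.
intrinsic = np.cumsum(rng.normal(0, 0.01, n + true_lag))
image_a = intrinsic[true_lag:]              # shorter path: variations arrive first
image_b = intrinsic[:n]                     # longer path: same variations, delayed

def best_lag(a, b, max_lag=600):
    """Shift b forward and return the lag giving the best correlation with a."""
    scores = [np.corrcoef(a[:a.size - lag], b[lag:])[0, 1]
              for lag in range(max_lag + 1)]
    return int(np.argmax(scores))

print(best_lag(image_a, image_b))           # recovers the true lag, 417
```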
In the 1960s Irwin I. Shapiro realised that there was another, and potentially far more accurate, way of testing Einstein's theory. Shapiro was a pioneer of radar astronomy and realised that the time a radar pulse takes to travel to and from a planet would be affected if the pulse passed close to the Sun. In the accompanying diagram, (a) shows the direct path that a radar pulse would take to and from Mars if we could imagine that the Sun was not present and that, as a consequence, space was flat. Path (b) on the diagram shows that, due to the curvature of space, a radar pulse sent along this precise path would curve away to the left and not reach Mars. The pulse that would reach Mars, shown as path (c), has to be sent slightly to the right of the direct line, so that the curvature of space near the Sun deflects it towards Mars. The echo follows exactly the same path in reverse. As the pulse has had to follow a longer route to Mars and back, it will obviously take longer than if the Sun were not present. The radar pulse will thus be delayed. The 'Shapiro Delay', as it is called, can reach up to 200 microseconds and provides an excellent test of Einstein's theory.
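The size of the delay can be estimated from the standard approximation delta_t ≈ (4GM/c³)[ln(4·r_E·r_M/b²) + 1] for a round trip grazing the solar limb. The orbital radii below are mean values, so this is only an order-of-magnitude sketch; the exact figure depends on the geometry at the time of the measurement:

```python
# Rough estimate of the round-trip Shapiro delay for an Earth-Mars radar
# pulse grazing the solar limb (mean orbital radii; real geometry varies).
import math

G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30
r_earth = 1.496e11    # mean Earth-Sun distance, m
r_mars = 2.279e11     # mean Mars-Sun distance, m
b = 6.96e8            # impact parameter: ray grazes the solar limb, m

delay = (4 * G * M_sun / c**3) * (math.log(4 * r_earth * r_mars / b**2) + 1)
print(delay * 1e6)    # round-trip delay in microseconds, a couple of hundred
```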
Further tests, of even higher accuracy, using the Shapiro delay have been made by monitoring the signals from spacecraft as the path of the signals passed close to the Sun. In 1979, the Shapiro delay was measured to an accuracy of one part in a thousand using observations of signals transmitted by the Viking spacecraft on Mars. More recently, observations made by Italian scientists using data from NASA's Cassini spacecraft, whilst en route to Saturn in 2002, confirmed Einstein's theory of general relativity with a precision 50 times greater than previous measurements. At that time the spacecraft and Earth were on opposite sides of the Sun separated by a distance of more than 1 billion kilometres (approximately 621 million miles). They precisely measured the change in the round-trip travel time of the radio signal as it travelled close to the Sun. A signal was transmitted from the Deep Space Network station in Goldstone California which travelled to the spacecraft on the far side of the Sun and there triggered a transmission which returned back to Goldstone. New techniques enabled the effects of the solar atmosphere on the signal to be eliminated so giving a very precise round trip travel time. The Cassini experiment confirmed Einstein's theory to an accuracy of 20 parts per million.
Though not specifically a "test" of Einstein's theories, the Global Positioning System (GPS) is a beautiful illustration of the fact that, if Einstein's two theories were not taken into account, the GPS system could not function. GPS essentially works by the transmission of highly accurate timing signals from a constellation of satellites orbiting the Earth. By "knowing" where the satellites are when they transmit their time signals, a receiver on the ground can calculate its distance from each observed satellite and hence where on the surface of the Earth it must be. The timing signals are derived from atomic clocks carried in each satellite. The satellites orbit the Earth at a height of ~20,200 km while moving at a speed of ~14,000 km per hour. Both these facts are significant. Einstein's special theory of relativity shows that a moving clock, when observed from a body at rest, will appear to run slow. The result is that, if a clock is set to give precise timing signals on the ground, it will appear to run slow when in orbit by 7 microseconds per day. One might thus set the clock to run fast on the ground so that, when in orbit, it runs at the correct rate.
But this would ignore Einstein's general theory of relativity. At a height of 20,200 km, the strength of the Earth's gravitational field is only about 6% of that measured at the Earth's surface. Clocks run faster in weaker gravitational fields, and this effect would make the clocks run fast by ~45 microseconds per day. Combining the two effects, in order to run at the correct rate in orbit the clocks have to be made to run slow by ~38 microseconds per day when calibrated on the ground!
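The two corrections are a standard textbook calculation: the standard figures are roughly -7 microseconds per day from special relativity and +46 from general relativity, for a net +38. A sketch, using approximate orbital values:

```python
# The two relativistic rate corrections for a GPS satellite clock
# (textbook calculation; orbital height and masses are round values).
import math

G, c = 6.674e-11, 2.998e8
M_e = 5.972e24          # Earth mass, kg
R_e = 6.371e6           # Earth radius, m
r_orbit = R_e + 20.2e6  # orbital radius, m

v = math.sqrt(G * M_e / r_orbit)            # circular orbital speed, ~3.9 km/s

# Special relativity: the moving clock runs slow.
sr_us_per_day = -(v**2 / (2 * c**2)) * 86400 * 1e6                     # ~ -7

# General relativity: the clock higher in the gravity well runs fast.
gr_us_per_day = (G * M_e / c**2) * (1/R_e - 1/r_orbit) * 86400 * 1e6   # ~ +46

print(sr_us_per_day + gr_us_per_day)        # net: orbit clock gains ~38 us/day
```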
The next major advance in testing Einstein's theory came with the discovery, by Russell Hulse and Joseph Taylor in 1974, of the first 'binary pulsar'. As observations of pulsars are key to what follows, a brief resume of their origin and properties is in order. In the latter phases of their life, nuclear fusion in the cores of massive stars builds up the elements to iron, the element with the most stable nucleus. When the core has converted all its mass into iron, nuclear fusion stops and gravity makes the core collapse. The vast majority of the protons fuse with electrons to give neutrons and finally, when the core is about 20 km in diameter, 'neutron degeneracy pressure' halts further collapse. Not surprisingly, the resulting object is called a neutron star. The density is incredibly high - one cubic centimetre would weigh about 10 times as much as Mount Everest - and it has a very powerful magnetic field, perhaps 600 trillion times stronger than that of the Earth. The progenitor star will have been rotating relatively slowly, as will its core. As the core collapses down, conservation of angular momentum causes its rotation speed to greatly increase, so that it rotates initially perhaps 60 times per second.
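The spin-up can be illustrated with a toy calculation: for a uniform sphere, conserving angular momentum gives P_new = P_old (R_new/R_old)². The pre-collapse core radius and period below are invented round numbers, chosen only to show that rotation rates of tens of turns per second emerge naturally:

```python
# Spin-up by conservation of angular momentum for a collapsing stellar core,
# treated as a uniform sphere (I ~ M R^2, so P scales as R^2 at fixed mass).
def spun_up_period(p_old_s, r_old_m, r_new_m):
    """New rotation period after collapse from radius r_old to r_new."""
    return p_old_s * (r_new_m / r_old_m) ** 2

# Illustrative: a ~10,000 km core turning once every 3 hours collapses
# to a ~10 km neutron star.
p = spun_up_period(3 * 3600, 1.0e7, 1.0e4)
print(p, 1 / p)   # period ~0.01 s, i.e. tens of rotations per second
```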
The magnetic field axis of the neutron star will usually be inclined to its rotation axis. This rotating field accelerates particles which give rise to beams of radio emission, in some cases with light and X-ray emission as well. The two beams, from above the north and south magnetic poles, sweep around the sky rather like those from a lighthouse. Should one or, rarely, both of these beams sweep across the Earth's position in space, radio telescopes will pick up a short pulse of energy. These radiating neutron stars thus rapidly gained the name 'pulsar'. The first was discovered by Jocelyn Bell in 1967. The beamed emission takes energy out of the system, and thus the rotation will gradually slow down. But, as you can imagine, the rotational energy in a 1.4 solar mass object spinning at a typical rate of 10 rotations per second is enormous, so the slow-down rate is very small, and pulsars thus make exceedingly good clocks - comparable to atomic clocks on Earth. In fact, when the first pulsar was discovered, it was not initially thought that a natural phenomenon could give rise to such accurately timed pulses, and it was suspected that perhaps it was a signal from an alien race. Its first, unofficial, name was LGM1 - Little Green Men one!
It is the fact that pulsars are such accurate clocks that has made them such valuable tools with which to test Einstein's theory. In the 'binary pulsar' system discovered by Hulse and Taylor, a 1.4 solar mass pulsar is orbiting a companion neutron star of almost equal mass; the system thus comprises two co-orbiting stellar-mass objects. General Relativity predicts that such a system will radiate gravitational waves - ripples in space-time that propagate out through the Universe at the speed of light. Though gravitational wave detectors are now in operation across the globe, this gravitational radiation is far too weak to be directly detected. But there is a consequent effect that can be detected: as the binary system is losing energy as the result of its gravitational radiation, the two stars should gradually spiral in towards each other. The fact that one of these objects is a pulsar allows us to determine the orbital parameters of the system very precisely. Precise observations made over the decades since it was first discovered, shown in the diagram, reveal the two bodies slowly spiralling in towards each other, exactly as Einstein's theory predicts! Hulse and Taylor received the Nobel Prize for Physics in 1993 for this outstanding work.
It is another pulsar system, this time one in which both objects are pulsars, called the 'Double Pulsar', that has produced the most stringent test of General Relativity to date. It was discovered in a survey carried out at the Parkes Telescope in Australia using receivers and data acquisition equipment built at the University of Manchester's Jodrell Bank Observatory. In an analysis of the resulting data using a supercomputer at Jodrell Bank, the double pulsar was discovered in 2003. It comprises two pulsars of masses 1.25 and 1.34 solar masses with rotation periods of 2.8 seconds and 23 milliseconds respectively. They orbit each other every 2.4 hours with an orbital major axis just less than the diameter of the Sun. The neutron stars are moving at speeds of ~0.1% that of light, and it is thus a system in which the effects of general relativity are more apparent than in any other known system. General Relativity predicts that the two neutron stars should currently be spiralling in towards each other at a rate of 7 mm per day. Observations made across the world since then, including those using the Lovell Telescope at Jodrell Bank, have shown this to be exactly as predicted.
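The 7 mm per day figure can be checked to order of magnitude using Kepler's third law for the orbital separation plus the Peters (1964) formula for orbital decay by gravitational radiation. Eccentricity is neglected here; for this nearly circular system it changes the answer by only a few per cent:

```python
# Order-of-magnitude check of the double pulsar's predicted inspiral rate.
import math

G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30
m1, m2 = 1.25 * M_sun, 1.34 * M_sun
P = 2.4 * 3600                      # orbital period, s

# Semi-major axis of the relative orbit, from Kepler's third law.
a = (G * (m1 + m2) * P**2 / (4 * math.pi**2)) ** (1/3)

# Peters formula (circular orbit): da/dt = -(64/5) G^3 m1 m2 (m1+m2) / (c^5 a^3)
dadt = (64/5) * G**3 * m1 * m2 * (m1 + m2) / (c**5 * a**3)

print(a, dadt * 86400 * 1000)       # a ~ 8.8e8 m; shrinkage of order 7 mm/day
```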
In fact, five predictions of General Relativity can be tested in this unique system. The one that has provided the highest precision is a measurement of the Shapiro delay. By great good fortune, the orbital plane of the two pulsars is almost edge on to us. Thus, when one of the two pulsars is furthest away from us its pulses have to pass close to the nearer one on their way to our radio telescopes. They will thus have to travel a longer path through the curved space surrounding the nearer one and suffer a delay that is close to 92 microseconds. The timing measurements agree with theory to an accuracy of 0.05%. Einstein must be at least 99.95% right!
As the two neutron stars are gradually getting closer, at some point in the future they will coalesce to form what may well be a black hole. As they finally merge into one, what I can only call a gravitational wave "tsunami" is produced. The predicted strength of this gravitational wave is sufficient for it to be detected by the gravitational wave detectors now in operation on the Earth: two in North America, two in Europe and one in Japan.
The way that they could detect such a gravitational wave can perhaps be understood by a "possible" way to detect a tsunami wave crossing an ocean. Suppose, in a "thought experiment", two boats are spaced one kilometre apart and an accurate laser system measures the distance between them. Should a tsunami wave reach one of them first, that boat will carry out a circular motion as the wave passes beneath, making a small momentary change in the two boats' separation which will be detected by the laser system. Some time later the wave will reach the second boat and the separation will again show a deviation. Note, however, that a tsunami wave coming side-on and reaching both boats simultaneously would not be detected, as the boats' motion would be at right angles to the distance being measured. To overcome this one might well have three boats making a right-angled triangle, so that waves reaching the boats from any angle could be detected.
This is exactly analogous to gravitational wave detectors such as "LIGO" - the Laser Interferometer Gravitational-Wave Observatory - in North America. LIGO uses a device called a laser interferometer, which uses laser light to measure, to very high precision, the time it takes light to travel between suspended mirrors. Two mirrors, 4 kilometres apart, form one "arm" of the interferometer, and two further mirrors make a second arm perpendicular to the first, forming an L shape. Laser light enters the system at the corner of the L and a beam splitter divides the light between the arms. The laser light reflects back and forth between the mirrors repeatedly before it returns to the beam splitter. Any deviations in the path lengths can be measured with extreme precision - movements as small as one thousandth the diameter of a proton can be detected! To achieve this, the mirrors and the light paths between them are housed in one of the world's largest vacuum systems, with a volume of nearly 300,000 cubic feet, evacuated to a pressure of only one-trillionth of an atmosphere. High-precision vibration-isolation systems are needed to shield the suspended mirrors from natural vibrations such as those produced by earth tremors.
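The scale of the measurement can be seen from a one-line estimate. Taking a strain amplitude of h ~ 10⁻²¹ (an assumed round figure, typical of design-sensitivity discussions), the change in a 4 km arm is:

```python
# Back-of-envelope: arm-length change in a 4 km interferometer arm for a
# gravitational-wave strain h = dL/L of ~1e-21 (assumed round figure).
arm_length = 4000.0          # metres
strain = 1e-21               # dimensionless strain amplitude
proton_diameter = 1.7e-15    # metres, approximate

dL = strain * arm_length
print(dL, dL / proton_diameter)   # ~4e-18 m: thousandths of a proton diameter
```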
To date, no gravitational waves have been detected. Gravitational wave detectors will not detect the merging of the two neutron stars in the double pulsar, for they are predicted to merge in 84 million years' time! However, we believe that such binary systems are common, so that such an event should happen every few years somewhere in the local universe - so watch this space! By 2015 the sensitivity of the LIGO systems will be greatly enhanced, and then events happening over much of our local universe could be detected - the direct detection of gravitational waves cannot be far off!
However, though we have now shown that Einstein's theory holds true to high precision, this cannot be the whole story.
One of the most perplexing problems in theoretical physics at the present time is the attempt to harmonize the theory of general relativity, which describes gravitation and applies to the large-scale structure of the universe (including stars, planets, galaxies), with quantum mechanics, which describes the fundamental forces acting at the atomic scale of matter. It is commonly thought that quantum mechanics and general relativity are irreconcilable, but general relativity can be linked to massless particles called gravitons. There is no proof of their existence, but quantized theories of matter necessitate their existence and they would act as "messenger particles" carrying information about changes in mass distribution in the same way that the other fundamental forces have messenger particles - for example photons are the messengers of the electromagnetic force and gluons are the messengers of the strong force (which keeps groups of three quarks bound together to form protons and neutrons).
The graviton is an essential element of much modern theoretical physics, and one major goal of the Large Hadron Collider, the world's largest particle accelerator, which is scheduled to come into operation this year, is to provide evidence for their existence, though it will not be able to detect them as such.
One problem is that the force of gravity is ~10^39 times weaker than the other fundamental forces that control the universe. One idea is that gravity may in fact have an intrinsic strength similar to that of the other forces, but appears weaker because it operates in a higher-dimensional space. This provides a link with string theories, in which there may, in fact, be 11 dimensions in all. Six of these are tightly curled up, and within them vibrate the fundamental entities - called strings; the way in which these vibrate defines the type of particle. Four further dimensions are those of space and time, which leaves one further dimension. Some think that gravitons can "leak out" into this hidden dimension, so that gravity appears to be far weaker than it actually is.
We have a lot to learn!