Category Mathematics

Zero

By Zealous Creative

Alan Turing’s Biology Paper

From Wired, by Brandon Keim

Near the end of his life, the great mathematician Alan Turing wrote his first and last paper on biology and chemistry, about how a certain type of chemical reaction ought to produce many patterns seen in nature.

Called “The Chemical Basis of Morphogenesis,” it was an entirely theoretical work. But in the following decades, long after Turing tragically took his own life in 1954, scientists found that his speculations described reality.

First found in chemicals in dishes, then in the stripes and spirals and whorls of animals, so-called Turing patterns abounded. Some think that Turing patterns may actually extend to ecosystems, even to galaxies. That’s still speculation, but a demonstration of Turing patterns in a controlled three-dimensional chemical system, published Feb. 11 in Science, is even more suggestive of just how complex the patterns can be.
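Turing’s mechanism is usually illustrated with a reaction-diffusion system: two interacting chemicals that diffuse at different rates can spontaneously organize into spots and stripes. Below is a minimal sketch of one such system, the Gray-Scott model, with illustrative parameters; it is not the specific three-dimensional system reported in Science.

```python
import numpy as np

# Minimal Gray-Scott reaction-diffusion sketch in the spirit of Turing's model.
# Parameters and grid size are illustrative, not taken from the Science paper.
n, steps = 128, 5000
Du, Dv, F, k = 0.16, 0.08, 0.060, 0.062   # diffusion rates, feed rate, kill rate

U = np.ones((n, n))
V = np.zeros((n, n))
# Seed a small square of the second chemical so patterns can nucleate.
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

def laplacian(Z):
    # Five-point stencil with periodic (wrap-around) boundaries.
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(steps):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

# V now holds a spotted/striped concentration field; view it with matplotlib's imshow.
```

Tweaking the feed and kill rates shifts the output between spots, stripes and labyrinths, the same sensitivity to parameters seen in the simulated panels below.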

On the following pages, Wired.com takes you on a Turing pattern tour.

Top images: Left: Alan Turing. (Ohio State University) Right: Patterns generated by a computer simulation of the Turing model. Each is made by the same basic equation, with its parameters slightly tweaked. (Shigeru Kondo & Takashi Miura/Science) Below: Turing pattern of cells in Dictyostelium, a slime mold. Image: National Institutes of Health.


Complex Mathematical Problem Solved by Bees

Via PhysOrg

Bumblebees can find the solution to a complex mathematical problem which keeps computers busy for days.

Scientists at Queen Mary, University of London and Royal Holloway, University of London have discovered that bees learn to fly the shortest possible route between flowers even if they discover the flowers in a different order. Bees are effectively solving the ‘Travelling Salesman Problem’, and these are the first animals found to do this.

The Travelling Salesman must find the shortest route that allows him to visit all locations on his route. Computers solve it by comparing the lengths of all possible routes and choosing the shortest. However, bees solve it without computer assistance, using a brain the size of a grass seed.
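For a handful of stops, that brute-force approach is easy to write down; a rough sketch, with made-up flower coordinates, might look like this:

```python
from itertools import permutations
import math

# Brute-force travelling salesman: try every ordering of the stops and keep
# the shortest closed tour. Flower coordinates are invented for illustration.
flowers = [(0, 0), (2, 5), (6, 1), (5, 4), (1, 3)]

def tour_length(order):
    # Distance of visiting the flowers in this order and returning to the start.
    return sum(math.dist(order[i], order[(i + 1) % len(order)])
               for i in range(len(order)))

best = min(permutations(flowers), key=tour_length)
print(best, round(tour_length(best), 2))
```

The number of orderings grows factorially with the number of stops, which is why the same approach applied to the hundreds of flowers a bee links in a day keeps computers busy for days.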

Professor Lars Chittka from Queen Mary’s School of Biological and Chemical Sciences said: “In nature, bees have to link hundreds of flowers in a way that minimizes travel distance, and then reliably find their way home – not a trivial feat if you have a brain the size of a pinhead! Indeed such traveling salesmen problems keep supercomputers busy for days. Studying how bee brains solve such challenging tasks might allow us to identify the minimal neural circuitry required for complex problem solving.”

The team used computer controlled artificial flowers to test whether bees would follow a route defined by the order in which they discovered the flowers or if they would find the shortest route. After exploring the location of the flowers, bees quickly learned to fly the shortest route.

As well as enhancing our understanding of how bees move around the landscape pollinating crops and wild flowers, this research, which is due to be published in The American Naturalist this week, has other applications. Our lifestyle relies on networks such as traffic on the roads, information flow on the web and business supply chains. By understanding how bees can solve their problem with such a tiny brain we can improve our management of these everyday networks without needing lots of computer time.

Co-author and Queen Mary colleague, Dr. Mathieu Lihoreau adds: “There is a common perception that smaller brains constrain animals to be simple reflex machines. But our work with bees shows advanced cognitive capacities with very limited neuron numbers. There is an urgent need to understand the neuronal hardware underpinning animal intelligence, and relatively simple nervous systems such as those of insects make this mystery more tractable.”

So Long Mandelbrot

Via BBC

Benoît Mandelbrot, who discovered mathematical shapes known as fractals, has died of cancer at the age of 85.

Mandelbrot, who had joint French and US nationality, developed fractals as a mathematical way of understanding the infinite complexity of nature.

The concept has been used to measure coastlines, clouds and other natural phenomena and had far-reaching effects in physics, biology and astronomy.

Mandelbrot’s family said he had died in a hospice in Cambridge, Massachusetts.

The visionary mathematician was born into a Jewish family in Poland but moved to Paris at the age of 11 to escape the Nazis.

He spent most of his life in the US, working for IBM and eventually becoming a professor of mathematical sciences at Yale University.

His seminal works, Fractals: Form, Chance and Dimension and The Fractal Geometry of Nature, were published in 1977 and 1982. In these, he argued that seemingly random mathematical shapes in fact followed a pattern if broken down into a single repeating shape.

The concept enabled scientists to measure previously immeasurable objects, including the coastline of the British Isles, the geometry of a lung or a cauliflower.

“If you cut one of the florets of a cauliflower, you see the whole cauliflower but smaller,” he explained at the influential Technology Entertainment and Design (TED) conference earlier this year.

“Then you cut again, again, again, and you still get small cauliflowers. So there are some shapes which have this peculiar property, where each part is like the whole, but smaller.”

Powerful mind

Fractal mathematics also led to technological developments in the fields of digital music and image compression.

It has also been influential in pop culture, with the patterns being used to create beautiful and intricate pieces of art. One such design is named in his honour.
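That design is the Mandelbrot set, the figure traced out by repeatedly applying z → z² + c and asking which starting values of c stay bounded. A toy sketch, with grid resolution and iteration limit chosen arbitrarily, renders a coarse version in the terminal:

```python
import numpy as np

# A coarse text rendering of the Mandelbrot set: iterate z -> z*z + c and mark
# the values of c that stay bounded. Grid size and iteration cap are arbitrary.
def mandelbrot(width=80, height=40, max_iter=50):
    rows = []
    for y in np.linspace(-1.2, 1.2, height):
        row = ""
        for x in np.linspace(-2.0, 1.0, width):
            z, c = 0j, complex(x, y)
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2:       # escaped: c is outside the set
                    row += " "
                    break
            else:
                row += "*"           # stayed bounded: c is (likely) inside the set
        rows.append(row)
    return "\n".join(rows)

print(mandelbrot())
```

Zooming in on the set’s boundary reveals ever-smaller copies of the whole figure, the same each-part-is-like-the-whole property Mandelbrot described in the cauliflower.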

Mandelbrot was also highly critical of the world banking system, arguing the economic model it used was unable to cope with its own complexity.

In a statement, French President Nicolas Sarkozy praised Mandelbrot for his “powerful, original mind that never shied away from innovating and shattering preconceived ideas”.

“His work, which was entirely developed outside the main research channels, led to a modern information theory,” he said.

Statistics is the New Grammar


Illustration: Ellen Lupton.

Via Wired.

Clive Thompson on Why We Should Learn the Language of Data:

“How can global warming be real when there’s so much snow?”

Hearing that question — repeatedly — this past February drove Joseph Romm nuts. A massive snowstorm had buried Washington, DC, and all across the capital, politicians and pundits who dispute the existence of climate change were cackling. The family of Oklahoma senator Jim Inhofe built an igloo near the Capitol and put up a sign reading “Al Gore’s New Home”. The planet can’t be warming, they said; look at all this white stuff!

Romm — a physicist and climate expert with the Center for American Progress — spent a week explaining to reporters why this line of reasoning is so wrong. Climate change, he said, is all about trend lines. You don’t observe it by looking out the window but by analyzing decades’ worth of data. Of course, snowstorm spin is possible only if the public (and journalists) are statistically illiterate. “A lot of this is counterintuitive,” Romm admits.

Statistics is hard. But that’s not just an issue of individual understanding; it’s also becoming one of the nation’s biggest political problems. We live in a world where the thorniest policy issues increasingly boil down to arguments over what the data mean. If you don’t understand statistics, you don’t know what’s going on — and you can’t tell when you’re being lied to. Statistics should now be a core part of general education. You shouldn’t finish high school without understanding it reasonably well — as well, say, as you can compose an essay.

Consider the economy: Is it improving or not? That’s a statistical question. You can’t actually measure the entire economy, so analysts sample chunks of it — they take a slice here and a slice there and try to piece together a representative story. One metric that’s frequently touted is same-store sales growth, a comparison of how much each store in a big retail chain is selling compared with a year ago. It’s been trending upward, which has financial pundits excited.

Problem is, to calculate that stat, economists remove stores that have closed from their sample. As New York University statistician Kaiser Fung points out, that makes the chains look healthier than they might really be. Does this methodological issue matter? Absolutely: When politicians see economic numbers pointing upward, they’re less inclined to fund stimulus programs.
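A toy calculation, with invented figures, shows how dropping closed stores can flip the sign of the headline number:

```python
# Invented figures only: how dropping closed stores can make "same-store sales
# growth" look healthier than the chain as a whole.
last_year = {"A": 100, "B": 120, "C": 90, "D": 110}
this_year = {"A": 104, "B": 123, "C": 0, "D": 112}   # store C has closed

still_open = [s for s in last_year if this_year[s] > 0]

growth_all  = (sum(this_year.values()) / sum(last_year.values()) - 1) * 100
growth_same = (sum(this_year[s] for s in still_open) /
               sum(last_year[s] for s in still_open) - 1) * 100

print(f"whole chain:       {growth_all:+.1f}%")    # negative: total sales fell
print(f"same-store metric: {growth_same:+.1f}%")   # positive: looks like growth
```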

Or take the raging debate over childhood vaccination, where well-intentioned parents have drawn disastrous conclusions from anecdotal information. Activists propagate horror stories of children who seemed fine one day, got vaccinated, and then developed autism. Of course, as anyone with any exposure to statistics knows, correlation is not causation. And individual stories don’t prove anything; when you examine data on the millions of vaccinated kids, even the correlation vanishes.

There are oodles of other examples of how our inability to grasp statistics — and the mother of it all, probability — makes us believe stupid things. Gamblers think their number is more likely to come up this time because it didn’t come up last time. Political polls are touted by the media even when their samples are laughably skewed. (This issue breaks left and right, by the way. Intellectually serious skeptics of anthropogenic climate change argue that the statistical case is weak — that Al Gore and his fellow travelers employ dubious techniques to sample and crunch global temperatures.)
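The gambler’s intuition is easy to test with a simulation. The sketch below, using a 37-slot roulette wheel and arbitrary trial counts, checks whether a number becomes more likely after a long absence:

```python
import random

# A quick test of the gambler's fallacy on a 37-slot roulette wheel (numbers and
# trial counts are arbitrary): is 17 more likely to hit after ten straight misses?
random.seed(1)
droughts, hits_after_drought = 0, 0

for _ in range(200_000):
    spins = [random.randrange(37) for _ in range(10)]
    if 17 not in spins:                  # 17 has not come up for ten spins
        droughts += 1
        hits_after_drought += (random.randrange(37) == 17)

print(hits_after_drought / droughts)     # stays close to 1/37, about 0.027
```

The hit rate after a drought stays at roughly 1 in 37; the wheel has no memory.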

Granted, thinking statistically is tricky. We like to construct simple cause-and-effect stories to explain the world as we experience it. “You need to train in this way of thinking. It’s not easy,” says John Allen Paulos, a Temple University mathematician.

That’s precisely the point. We often say, rightly, that literacy is crucial to public life: If you can’t write, you can’t think. The same is now true in math. Statistics is the new grammar.

Research Concludes There is No ‘Simple Theory of Everything’ Inside the Enigmatic E8

Via PhysOrg:

Emory University mathematician Skip Garibaldi did the math to disprove the theory, which involves a mysterious structure known as E8. The resulting paper, co-authored by physicist Jacques Distler of the University of Texas, will appear in an upcoming issue of Communications in Mathematical Physics.

“The beautiful thing about math and physics is that it is not subjective,” says Garibaldi. “I wanted a peer-reviewed paper published, so that the scientific literature provides an accurate state of affairs, to help clear up confusion among the lay public on this topic.”

In November of 2007, physicist Garrett Lisi published an online paper entitled “An Exceptionally Simple Theory of Everything.” Lisi spent much of his time surfing in Hawaii, adding a bit of color to the story surrounding the theory. Although his paper was not peer-reviewed, and Lisi himself commented that his theory was still in development, the idea was widely reported in the media, under attention-grabbing headlines like “Surfer dude stuns physicists with theory of everything.”

Garibaldi was among the skeptics when the theory hit the news. So was Distler, a particle physicist, who wrote about problems he saw with Lisi’s idea on his blog. Distler’s posting inspired Garibaldi to think about the issue more, eventually leading to their collaboration.

Lisi’s paper centered on the elegant mathematical structure known as E8, which also appears in string theory. First identified in 1887, E8 has 248 dimensions and cannot be seen, or even drawn, in its complete form.

The enigmatic E8 is the largest and most complicated of the five exceptional Lie groups, and contains four subgroups that are related to the four fundamental forces of nature: the electromagnetic force; the strong force (which binds quarks); the weak force (which controls radioactive decay); and the gravitational force.
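The 248 dimensions have a concrete accounting: in the standard construction of E8’s root system, 112 roots come from pairs of coordinate axes, 128 from half-integer sign patterns, and adding the group’s rank of 8 gives 248. A short sketch enumerates them (this is textbook Lie theory, not anything drawn from the Garibaldi-Distler paper):

```python
from itertools import combinations, product

# Enumerate the 240 roots of the E8 root system (a standard construction from
# Lie theory; illustrative only, not taken from the Garibaldi-Distler paper).
roots = []

# Type 1: +/- e_i +/- e_j for i < j  ->  28 pairs x 4 sign choices = 112 roots
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        roots.append(tuple(v))

# Type 2: all coordinates +/- 1/2 with an even number of minus signs = 128 roots
for signs in product((0.5, -0.5), repeat=8):
    if sum(1 for s in signs if s < 0) % 2 == 0:
        roots.append(signs)

print(len(roots))        # 240 roots
print(len(roots) + 8)    # 248 = rank 8 + 240 roots, the dimension of E8
```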

In a nutshell, Lisi proposed that E8 is the unifying force for all the forces of the universe.

“That would be great if it were true, because I love E8,” Garibaldi says. “But the problem is, it doesn’t work as he described it in his paper.”

As a leading expert on several of the exceptional Lie groups, Garibaldi felt an obligation to help set the record straight. “A lot of mystery surrounds the Lie groups, but the facts about them should not be distorted,” he says. “These are natural objects that are central to mathematics, so it’s important to have a correct understanding of them.”

Using linear algebra and proving theorems to translate the physics into math, Garibaldi and Distler not only showed that the formulas proposed in Lisi’s paper do not work, they also demonstrated the flaws in a whole class of related theories.

“You can think of E8 as a room, and the four subgroups related to the four fundamental forces of nature as furniture, let’s say chairs,” Garibaldi explains. “It’s pretty easy to see that the room is big enough that you can put all four of the chairs inside it. The problem with ‘the theory of everything’ is that the way it arranges the chairs in the room makes them non-functional.”

He gives the example of one chair inverted and stacked atop another chair.

“I’m tired of answering questions about the ‘theory of everything,’” Garibaldi says. “I’m glad that I will now be able to point to a peer-reviewed scientific article that clearly rebuts this theory. I feel that there are so many great stories in science, there’s no reason to puff up something that doesn’t work.”

More information: Paper: http://arxiv.org/abs/0905.2658

How Your Brain Tells Time

Via Forbes, Jonathan Fahey:

Researchers are building a mathematical model of how brains keep time, and finding some surprises.

In the middle of your brain, there’s a personal assistant the size of a grain of rice. It’s a group of about 20,000 brain cells that keeps your body’s daily schedule.

Partly in response to light signals from the retina, this group of neurons sends signals to other parts of the brain and the rest of the body to help control things like sleep, metabolism, immune system activity, body temperature and hormone production on a schedule slightly longer than 24 hours.

Daniel Forger, a mathematics professor at the University of Michigan who uses math to study biological processes, wants to understand this brain region, called the suprachiasmatic nucleus (SCN) in excruciating detail. He is building a mathematical model of the entire structure that he thinks will shed important light on our circadian rhythm, and perhaps lead to treatments for disorders like depression and insomnia, and even diseases influenced by the internal clock like heart disease, Alzheimer’s and cancer.

“I think we’re going to be able to have a very accurate model of the circadian rhythm, all the key proteins, all the electric activity of all 20,000 neurons,” he says. “We’ll be able to track all of them for days on a timescale of milliseconds.”
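Forger’s full model is far more detailed than anything that fits here, but the flavor of the mathematics can be suggested with a toy phase-oscillator (Kuramoto-style) sketch, in which weakly coupled cells with slightly different intrinsic periods lock onto a shared daily rhythm. Everything below (cell count, coupling strength, time step) is an assumption for illustration only:

```python
import numpy as np

# A toy Kuramoto-style phase model of coupled clock cells -- a sketch of the kind
# of mathematics involved, not Forger's detailed model of the SCN.
rng = np.random.default_rng(0)
n = 200                                    # far fewer than the real ~20,000 cells
omega = 2 * np.pi / (24 + rng.normal(0, 0.5, n))   # intrinsic periods near 24 hours
theta = rng.uniform(0, 2 * np.pi, n)       # initial phases
K, dt = 0.05, 0.1                          # coupling strength, time step (hours)

for _ in range(int(240 / dt)):             # simulate ten days
    mean_field = np.mean(np.exp(1j * theta))
    theta += dt * (omega + K * np.abs(mean_field) *
                   np.sin(np.angle(mean_field) - theta))

# A synchrony index near 1 means the cells have locked onto a shared daily rhythm.
print(abs(np.mean(np.exp(1j * theta))))
```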

Forger has already taken a few steps down this path and found some surprises. In a paper published in a recent issue of the journal Science, Forger, along with colleagues Mino Belle and Hugh Piggins of the University of Manchester in England and others, showed that the firing pattern of the time-keeping neurons in the SCN was not at all what researchers had long thought.

Researchers who studied the electrical activity of the SCN had believed that the neurons there helped the body keep time by sending lots of electrical signals during the day, and then falling silent at night. Makes sense. Lots of non-teenage creatures are active during the day and quiet at night.

But when Forger used experimental data to build a mathematical model of the electrical activity, he calculated that there should be lots of activity at dawn and dusk, and a state of “quiet alertness” during the day. That didn’t make much intuitive sense. Worse, the cellular chemistry that Forger’s model predicted during this quiet period would, in normal cells, quickly lead to cell death.

“Skepticism doesn’t begin to describe what I was met with,” says Forger. “Experimentalists told me, ‘That’s crazy.'”

Researchers in the field simply assumed Forger’s model was wrong. Forger refined it and reworked it, and got similar results. Meanwhile, his British colleagues began to probe the fact that there are two types of cells in the SCN, ones that have very strong molecular clocks and do the timekeeping, and others that behave more like normal brain cells.

While previous researchers had recorded the activity of all of the cells in the SCN, Belle and Piggins were able to set up an experiment using mice that would record only the activity of the clock cells. (Mammalian central clocks all seem to work the same way.) Their experimental results matched Forger’s predictions.

“When we got the results, they were shocking,” Forger says. “They were dead on.”

The cells in the SCN that don’t keep time followed the pattern researchers were familiar with, active during the day, quiet at night. The time-keeping cells went bananas in the morning and at night, but then during the day they stayed in a bizarre state of excitement during which they emitted very few impulses. Why these cells can stay alive in this state remains a mystery.

Forger has been down this path before. Another study of his, published in 2007, reversed the thinking on how gene mutations affect circadian rhythms within cells.

Scientists studying a hamster that had a malfunctioning internal clock (its daily rhythm lasted 20 hours instead of 24) found that it had a mutation in a gene called tau. The fuzzy rodent was given the extremely appropriate name “Tau Mutant Hamster.”

They thought Tau Mutant Hamster’s mutation caused an enzyme that helped cells keep time to be less active. Forger predicted that it would instead make the enzyme more active. Experiments later proved he was right.

Now Forger is turning his attention to the entire SCN. He thinks that math is the only way we can understand the sheer complexity of what is happening: neurotransmitters coming and going, protein clocks being built up and broken down, electricity bouncing around.

“To piece it all together, you need more than intuition,” he says. “You need math to see what’s going on.”

Spotted at Darren Brown’s blog.

Hollywood Movies Follow a Mathematical Formula

Via PhysOrg:

Psychologist Professor James Cutting and his team from Cornell University in Ithaca, New York, analyzed 150 high-grossing Hollywood films released from 1935 to 2005 and discovered the shot lengths in the more recent movies followed the same mathematical pattern that describes the human attention span. The pattern was derived by scientists at the University of Texas in Austin in the 1990s who studied the attention spans of subjects performing hundreds of trials. The team then converted the measurements of their attention spans into wave forms using a mathematical technique known as the Fourier transform.

They found that the magnitude of the waves increased as their frequency decreased, a pattern known as pink noise, or 1/f fluctuation, which means that attention spans of the same lengths recurred at regular intervals. The same pattern has been found by Benoit Mandelbrot (the chaos theorist) in the annual flood levels of the Nile, and has been seen by others in air turbulence, and also in music.

Cutting made his discovery by measuring the length of every shot in 150 comedy, drama and action films, and then converted the measurements into waves for every movie. He found that the more recent the films were, the more likely they were to obey the 1/f fluctuation, and this did not just apply to fast action movies. Cutting said the significant thing is that shots of similar lengths recur in a regular pattern through the film.
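A rough sketch of that analysis: treat a film’s shot lengths as a series, take its Fourier transform, and fit the slope of log power against log frequency; a slope near -1 is the 1/f signature. The shot lengths below are random placeholders, not measurements from Cutting’s 150 films.

```python
import numpy as np

# Sketch of the analysis described above: Fourier-transform a film's shot-length
# series and fit the log-log slope of power versus frequency. A slope near -1 is
# the pink-noise / 1/f signature. These shot lengths are random placeholders,
# not data from Cutting's 150 films.
rng = np.random.default_rng(0)
shot_lengths = rng.lognormal(mean=1.5, sigma=0.5, size=1024)   # seconds, illustrative

spectrum = np.abs(np.fft.rfft(shot_lengths - shot_lengths.mean())) ** 2
freqs = np.fft.rfftfreq(len(shot_lengths))

mask = freqs > 0
slope, _ = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), 1)
print(f"spectral slope: {slope:.2f}  (1/f noise would give a slope of about -1)")
```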

Cutting believes obeying the 1/f law makes films “resonate with the rhythm of human attention spans,” and this makes them more gripping. Films edited in this way would then tend to be more successful, and the style of shooting and editing more likely to be copied. Films of Cutting’s own favorite genre, film noir, do not generally follow the 1/f law, with shot lengths tending to be more random. By contrast, The Empire Strikes Back (1980) and the 2005 blockbuster Star Wars Episode III (which Cutting considers to be “just dreadful”) both follow 1/f rigidly.

The researchers concluded that over the next few decades film makers may take more care to follow the 1/f law to try to boost audience engagement.

Pithoprakta: action through probability

From the Iannis Xenakis: Composer, Architect, Visionary press release:

Now on show at The Drawing Center, in the Main Gallery from January 15 – April 8, 2010, the exhibition will explore the fundamental role of drawing in the work of avant-garde composer Iannis Xenakis (1922–2001). One of the most important figures in late twentieth-century music, Xenakis originally trained as an engineer and was also known as an architect, developing iconic designs while working with Le Corbusier in the 1950s. This premiere presentation of Xenakis’s visual work in North America will comprise samples of his pioneering graphic notation, architectural plans, compelling preparatory mathematical renderings, and pre-compositional sketches—in all, nearly 100 documents created between 1953 and 1984. Iannis Xenakis: Composer, Architect, Visionary is co-curated by Xenakis scholar Sharon Kanach and critic Carey Lovelace and will travel to the Canadian Centre for Architecture (June 17 – October 17, 2010) and the Museum of Contemporary Art, Los Angeles (November 7, 2010 – February 13, 2011).

One of the world’s most widely performed contemporary composers, Xenakis brought together architecture, sound, and advanced contemporary mathematics, moving away from traditional polyphony to create music made up of masses of sound, shifting abstract aural gestures, linear permutation, and sonic pointillism. A groundbreaking interdisciplinary approach was also apparent in his architectural creations, such as the Philips Pavilion, an icon of twentieth-century architecture, which Xenakis created under Le Corbusier for the 1958 Brussels World’s Fair. The design of the Philips Pavilion’s volumetric structure was inspired by the glissandi—glides between pitches—that made up Xenakis’s groundbreaking orchestral work Metastaseis (1953–54). The meticulously rendered works on view in the exhibition burst with kinetic energy and palpable sonic qualities, providing a singular insight into this extraordinary innovator’s process of “thinking through the hand.”