From ScienceNOW, by Govert Schilling
For the first time, astronomers have found a planet smack in the middle of the habitable zone of its sunlike star, where temperatures are good for life. “If this planet has a surface, it would have a very nice temperature of some 70° Fahrenheit [21°C],” says William Borucki of NASA’s Ames Research Center, the principal investigator of NASA’s Kepler space telescope. “[It’s] another milestone on the journey of discovering Earth’s twin,” adds Ames director Simon “Pete” Worden.
Unfortunately, the true nature of the planet, named Kepler-22b, remains unknown. It is 2.4 times the size of Earth, but its mass, and hence its composition, has not yet been determined. “There’s a good chance it could be rocky,” Borucki says, although he adds that the planet would probably contain huge amounts of compressed ice, too. It might even have a global ocean. “We have no planets like this in our own solar system.”
Kepler-22b is 600 light-years away. Every 290 days, it orbits a star that is just a bit smaller and cooler than our own sun. The Kepler telescope, launched in 2009 to scan the skies for Earth-like worlds, found the planet because it sees the orbit edge-on. That means that every 290 days, the world transits the face of the star, blocking out a minute fraction of its light.
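Just how minute that fraction is can be estimated from the ratio of disk areas: the transit depth is (planet radius / star radius) squared. A quick back-of-the-envelope sketch, assuming Kepler-22's radius is close to the sun's (the article only says "a bit smaller," so the exact value is an assumption):

```python
# Transit depth: fraction of starlight blocked = ratio of disk areas
R_EARTH_IN_SOLAR_RADII = 1.0 / 109.2   # Earth's radius as a fraction of the sun's

planet_radius = 2.4 * R_EARTH_IN_SOLAR_RADII  # Kepler-22b, in solar radii
star_radius = 1.0                             # assumed roughly solar

depth = (planet_radius / star_radius) ** 2
print(f"Transit depth: {depth:.6f} (~{depth * 1e6:.0f} parts per million)")
```

That works out to roughly 0.05 percent of the star's light, which is why catching such transits takes a precision space-based photometer.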
Borucki likes to call the new discovery the Christmas planet. “It’s a great gift,” he said at a press conference this morning. “We were very fortunate to find it.” The first of the three observed transits occurred only days after Kepler started observing. The third was seen just before Christmas 2010, shortly before a technical glitch temporarily halted the spacecraft’s observations. Says Borucki: “We could’ve easily missed it altogether.”
“There are two things really exciting about this planet,” adds Natalie Batalha, Kepler’s deputy science team leader. “It’s right in the middle of the habitable zone [the region around a star where temperatures are neither too high nor too low for liquid water to exist], and it orbits a star very similar to our sun.” Previously discovered “habitable” planets orbited dim, red dwarf stars, or they were located at the edge of the habitable zone, with more extreme temperatures.
At the press conference, which marked the start of the five-day First Kepler Science Conference at NASA Ames, Batalha also announced 1,094 new planet candidates found by Kepler since February 2011, bringing the total to a whopping 2,326. So far, only 29 of these (including Kepler-22b) have been confirmed as genuine planets, but Kepler scientists have good reason to expect that at least 90% of all candidates will turn out to be real.
Forty-eight of these planet candidates orbit in the habitable zone of their parent stars. Most are substantially larger than Earth, but 10 are about the same size as our home planet. Some of these are in multiplanet systems. “It’s conceivable that any — or many — of these 48 habitable zone candidates, or their moons, could have life,” Borucki says.
Jill Tarter of the SETI Institute in Mountain View, Calif., says the habitable Kepler planets are prime targets for the Search for Extra-Terrestrial Intelligence (SETI), carried out with the dedicated 42-dish Allen Telescope Array in Northern California. “We’re taking everything we can get from our Kepler colleagues to look for techno-signatures” that might betray the existence of an alien civilization, she says.
So far, the question about extraterrestrial life is very much open. “We don’t know whether Earth as it is, and life as we know it, is very unusual or very common,” Tarter says. However, if scientists find a second place in the universe where life once got started, it will be obvious that life must be widespread. Says Tarter: “In this field, the number two is important. We count one, two, infinity.”
Image: Artist’s rendition of Kepler-22b (NASA)
Via BBC Health, by Helen Briggs
Social network sites may be changing people’s brains as well as their social life, research suggests.
Brain scans show a direct link between the number of Facebook friends a person has and the size of certain parts of their brain.
It is not clear whether using social networks boosts grey matter or whether those with certain brain structures are simply good at making friends, the researchers say.
The regions involved have roles in social interaction, memory and autism.
The work, published in the journal Proceedings of the Royal Society B: Biological Sciences, looked at 3-D brain scans of 125 university students from London.
Researchers counted the number of Facebook friends each volunteer had, as well as assessing the size of their network of real friends.
A strong link was found between the number of Facebook friends a person had and the amount of grey matter in certain parts of their brain.
The study also showed that the number of Facebook friends a person was in touch with was reflected in the number of “real-world” friends.
“We have found some interesting brain regions that seem to link to the number of friends we have – both ‘real’ and ‘virtual’,” said Dr Ryota Kanai, one of the researchers from University College London.
“The exciting question now is whether these structures change over time. This will help us answer the question of whether the internet is changing our brains.”
One region involved is the amygdala, which is associated with memory and emotional responses.
Previous research has shown a link between the volume of grey matter in the amygdala and the size and complexity of real world social networks. Grey matter is the brain tissue where mental processing takes place.
Three other areas of the brain were linked with the size of someone’s online social network but not their tally of real-world friends.
The right superior temporal sulcus has a role in perception and may be impaired in autism. The left middle temporal gyrus is associated with “reading” social cues, while the third – the right entorhinal complex – is thought to be important in memory and navigation.
Professor Geraint Rees, from UCL, who led the research, said little is understood about the impact of social networks on the brain, which has led to speculation the internet is somehow bad for us.
“Our study will help us begin to understand how our interactions with the world are mediated through social networks,” he said.
“This should allow us to start asking intelligent questions about the relationship between the internet and the brain – scientific questions, not political ones.”
Cause and effect
Facebook, the world’s most popular social networking site, has more than 800 million active users around the world. The site allows people to keep in touch with friends, from a handful to a thousand or more.
Dr John Williams, Head of Neuroscience and Mental Health at the Wellcome Trust, which funded the study, said: “We cannot escape the ubiquity of the internet and its impact on our lives, yet we understand little of its impact on the brain, which we know is plastic and can change over time.
“This new study illustrates how well-designed investigations can help us begin to understand whether or not our brains are evolving as they adapt to the challenges posed by social media.”
Although the study found a link between human brain structure and online social network size, it did not test cause and effect.
Dr Heidi Johansen-Berg, reader in Clinical Neurology at the University of Oxford’s Centre for Functional MRI of the Brain, said the study found only a weak relationship between the number of Facebook friends and the number of friends in the real world.
“Perhaps the number of Facebook friends you have is more strongly related to how much time you spend on the internet, how old you are, or what mobile phone you have,” she said.
“The study cannot tell us whether using the internet is good or bad for our brains.”
Direct Images of Other Worlds
Another well-studied planet orbits Beta Pictoris, a young A-type star 63 light-years away. The planet is estimated to be eight times more massive than Jupiter and to orbit at only 8 astronomical units, slightly less than the distance between the sun and Saturn. Some data suggest the planet is unusually wide, and one explanation would be that it is surrounded by a ring of its own, perhaps making it even more like Saturn.
Because it is much closer to its star than most other directly imaged planets, astronomers have been able to image this exoplanet at many points of its orbit. It was seen once in 2003 and again, on the other side of the star, in 2009. Researchers estimate that the planet should complete its orbit in about 15 years.
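That period estimate can be sanity-checked with Kepler's third law, P² = a³/M, where P is in years, a in AU, and M in solar masses. A minimal sketch; the stellar masses below are assumed values for illustration, not figures from the article:

```python
import math

def orbital_period_years(a_au: float, star_mass_msun: float) -> float:
    """Kepler's third law: P^2 = a^3 / M (P in years, a in AU, M in solar masses)."""
    return math.sqrt(a_au ** 3 / star_mass_msun)

# Compare a solar-mass star with an assumed ~1.75 solar masses for Beta Pictoris
for mass in (1.0, 1.75):
    print(f"M = {mass} Msun -> P = {orbital_period_years(8.0, mass):.1f} years")
```

For a star of exactly one solar mass the law gives about 23 years at 8 AU; Beta Pictoris is more massive than the sun, which shortens the period toward the quoted estimate.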
Astronomers can also see a dust-free gap around Beta Pictoris. Because the planet is in the middle of the gap, it is suspected of vacuuming up the gas and dust that exists around the young star.
From Wired, by Dave Mosher
A radio telescope array being built in the highest, driest desert in the world has photographed two colliding galaxies for its first public test shots.
The new images reveal a flurry of star formation within thick clouds of gas and dust at the Antennae Galaxies’ impact zone, 45 million light-years away. Older star-forming regions appear as a faint orange in the image while the youngest — some 3 to 4 million years old — glow bright yellow.
The same murky material that leads to star birth also blocks visible wavelengths of light, but the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile’s high Atacama Desert sees radio wavelengths.
“In the past we couldn’t study them because they were behind the dust. The thing that’s been missing is the youngest stars, which are the most interesting,” said astronomer Brad Whitmore of the Space Telescope Science Institute in a webcast. “This is a beautiful example where we’ll be able to see the full life histories of star clusters.”
Gas and dust absorb the light of stars and then re-emit the energy in different wavelengths of light. Yet like black-out curtains, the thickest molecular dust clouds are too murky for almost any wavelength of light to escape.
Radio wavelengths are an exception. Similar to how curtains or even thick walls don’t absorb all of a local radio station’s broadcast, radio starlight can weave through dense molecular clouds, across the universe and reach human-built telescopes.
Frequencies of radio light that ALMA can detect don’t merely indicate the presence of hot young stars. The light also carries with it rich chemical information about the hearts of star-forming regions.
“For the last 25 years, we have really only relied on being able to see carbon monoxide or hydrogen cyanide,” said astronomer Kartik Sheth of the National Radio Astronomy Observatory in the webcast. “For the first time, we can see the entire chemical spectrum.”
To capture radio wavelengths of light, an international team of scientists and engineers has installed 22 of 66 planned radio antennas, each of which weighs nearly 100 tons and measures 40 feet (about 12 meters) in diameter. The last of the $1 billion array's antennas should be in place by 2013.
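The dish counts translate directly into geometric collecting area, which is what drives an array's sensitivity. A rough sketch, treating all 66 planned dishes as the same 12-meter type; this is an approximation for illustration, since the completed array actually mixes dish sizes:

```python
import math

DISH_DIAMETER_M = 12.0  # ~40 feet, per the article

def array_area_m2(n_dishes: int, diameter_m: float = DISH_DIAMETER_M) -> float:
    """Total geometric collecting area of n identical circular dishes."""
    return n_dishes * math.pi * (diameter_m / 2) ** 2

print(f"22 dishes: {array_area_m2(22):,.0f} m^2")
print(f"66 dishes: {array_area_m2(66):,.0f} m^2")
print(f"Area ratio at completion: {array_area_m2(66) / array_area_m2(22):.1f}x")
```

By this simple geometric measure, finishing the array roughly triples the collecting area available to observers.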
By then ALMA should have a resolution eight to 10 times better than any other radio telescope on Earth, perhaps enough to peer into planet formation within the Milky Way galaxy.
And because ALMA’s collecting area will roughly triple as the remaining antennas come online, astronomers also expect a sensitivity up to 100 times better than other radio telescope arrays, letting observers competing for limited time on the array capture their images more quickly.
“ALMA’s test views of the Antennae show us star-forming regions on a level of detail that no other telescope on Earth or in space has attained,” said astronomer Mark McKinnon, a project manager of the telescope array, in a press release. “This capability can only get much better as ALMA nears completion.”
Visible light images don’t reveal star-creating regions well — there’s too much gas and dust in the way (bottom). Seen in longer-wavelength radiowaves, however, hotspots of star birth shine bright (orange/yellow patches, top). The younger the star birth, the brighter the signal.
Images: NRAO/AUI/NSF/ESO/NAOJ/NASA/ESA/B. Whitmore
From BBC News, by Katia Moskvitch
Cities could soon be looking after their citizens all by themselves thanks to an operating system designed for the metropolis.
The Urban OS works just like a PC operating system but keeps buildings, traffic and services running smoothly.
The software takes in data from sensors dotted around the city to keep an eye on what is happening.
In the event of a fire the Urban OS might manage traffic lights so fire trucks can reach the blaze swiftly.
The idea is for the Urban OS to gather data from sensors buried in buildings and many other places to keep an eye on what is happening in an urban area.
The sensors monitor everything from large scale events such as traffic flows across the entire city down to more local phenomena such as temperature sensors inside individual rooms.
The OS completely bypasses humans to manage communication between sensors and devices such as traffic lights, air conditioning or water pumps that influence the quality of city life.
Channelling all the data coming from these sensors and services into an over-arching control system has many benefits, said Steve Lewis, head of Living PlanIT, the company behind the Urban OS.
Urban OS should mean buildings get managed better and gathering the data from lots of sources gives a broader view of key city services such as traffic flows, energy use and water levels.
“If you were using an anatomy analogy, the city has a network like the nervous system, talking to a whole bunch of sensors gathering the data and causing actions,” said Mr Lewis.
“We distribute that nervous system into the parts of the body – the buildings, the streets and other things.”
Having one platform managing the entire urban landscape of a city means significant cost savings, implementation consistency, quality and manageability, he added.
“And it’s got local computing capacity to allow a building or an automotive platform to interact with people where they are, managing the energy, water, waste, transportation, logistics and human interaction in those areas.”
The underlying technology for the Urban OS has been developed by McLaren Electronic Systems – the same company that creates sensors for Formula One cars. The Urban OS was unveiled at the Machine-2-Machine conference in Rotterdam.
To support the myriad devices in a city, the firm has developed an extensive set of application services to run on the Urban OS, dubbed PlaceApps – the urban environment’s equivalent of apps on a smartphone.
Independent developers will also be able to build their own apps to get at data and provide certain services around a city.
Mr Lewis said that eventually applications on smartphones could hook into the Urban OS to remotely control household appliances and energy systems, or safety equipment to monitor the wellbeing of elderly people.
It could also prove useful in the event of a fire in a building, he said.
Sensors would spot the fire and then the building would use its intelligence to direct people inside to a safe stairwell, perhaps by making lights flicker or alarms get louder in the direction of the exit.
“That’s dealt with by the building itself, with the devices very locally talking to each other to figure out what’s the best solution for the current dilemma, and then providing directions and orchestrating themselves,” said Mr Lewis.
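The article does not describe Living PlanIT's actual interfaces, but the pattern Lewis sketches, with devices locally "talking to each other" in response to an event, is classic publish/subscribe. A purely hypothetical illustration; every class, event, and device name here is invented:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal publish/subscribe hub: sensors publish events, devices react."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]):
        self._handlers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict):
        # Deliver the event to every device that registered for it
        for handler in self._handlers[event_type]:
            handler(payload)

bus = EventBus()

# Hypothetical devices reacting to a fire alarm, as in Lewis's scenario
bus.subscribe("fire", lambda e: print(f"Stairwell lights on, floor {e['floor']}"))
bus.subscribe("fire", lambda e: print(f"Traffic lights cleared on {e['street']}"))

# A smoke sensor publishes the event; devices coordinate with no human in the loop
bus.publish("fire", {"floor": 5, "street": "Main St"})
```

The point of the pattern is that the sensor never needs to know which devices exist: anything subscribed to the event type reacts on its own, which is the local "orchestration" Lewis describes.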
Living PlanIT is working with Cisco and Deutsche Telekom on different parts of the system.
Markus Breitbach of the Machine to Machine Competence Center at Deutsche Telekom said that his firm was helping to bring all the parts of the Urban OS together.
“Everybody’s talking about 50 billion connected devices, which effectively means huge amounts of data being collected, but nobody is really caring about managing it and bringing it into a context – and Urban OS can do just that,” he said.
“If there’s a fire alarm on the fifth floor and the elevator is going to the next floor, the light will switch on – but in addition the traffic lights will be switched accordingly to turn the traffic in the right direction so that fire workers can get through.
“And this is what Urban OS is providing, this kind of solution to analyse mass data, enter it in a context and perform magical actions.”
A test bed for the Urban OS is currently being built in Portugal. For its work in developing smart cities, Living PlanIT was selected as one of the World Economic Forum’s Technology Pioneers of 2012.
Fighting Dragons of Ara
Photograph by Michael Sidonio
Known as the fighting dragons of Ara, two colorful gas clouds appear to be posing in attack position in a picture that received honorable mention in the “Deep Space” category.
Australian astroimager Michael Sidonio captured subtle hues of purple, orange, and green from the giant cloud of gas and dust, which sits 4,000 light-years from Earth in the southern constellation of Ara.
The 300-light-year-wide molecular cloud is being shaped by radiation from massive young stars that formed inside it during the past few million years.
Vela Supernova Remnant
Photograph by Marco Lorenzi
Sprawling across the southern constellation Vela, the sails of the mythological ship Argo, is the nebulous Vela supernova remnant—all that remains of a star that exploded 12,000 years ago.
The cobweb-like structure, which lies more than 800 light-years from Earth, is seen expanding across a field of stars in this picture by Marco Lorenzi in Italy, winner of the “Deep Space” category.
“I’ve always been inspired by supernova remnants, in particular by their reach and their different compositions. After all, several of the building bricks of life are created during these apocalyptic events,” Lorenzi said in a press statement.
From National Geographic
On August 9 the sun shot out an X-class solar flare, the most intense type of flare, aimed directly at Earth. NASA’s Solar Dynamics Observatory captured this image of the flare in extreme ultraviolet light.
The megaflare unleashed charged particles from the sun, which can boost auroral displays but can also disrupt GPS and communications signals when they reach Earth.
NASA warned that the August 9 flare could cause scattered radio blackouts, but that an associated coronal mass ejection—a dense cloud of solar particles—would miss the planet, minimizing risks to satellites and the power grid.
Via ArchDaily, by Oscar Lopez
Not wanting to be outshone by the Americans and their developments in color television, the Philips Electronics Company decided to step away from displaying commercial goods at the 1958 Brussels World’s Fair (Expo 58) and instead create a unique experience for the thousands of people who would attend. The company assembled an international team of an architect, an artist and a composer to create a pavilion displaying electronic technology in as many forms as possible, serving arts, culture, and the overall betterment of humankind.
Philips turned to the office of Le Corbusier for the commission of the pavilion. Le Corbusier replied, “I will not make a pavilion for you but an Electronic Poem and a vessel containing the poem; light, color, image, rhythm and sound joined together in an organic synthesis.” Le Corbusier took on the sole task of developing the interior of the vessel, leaving the exterior design of the pavilion to his protégé Iannis Xenakis, who was also trained as an experimental composer and would therefore also create the transitional music that guided visitors into the formal space of organized sound.
For the composer of the Poème électronique, Le Corbusier commissioned Edgard Varèse, choosing him over better-known composers of the time such as Benjamin Britten and Aaron Copland, both of whom the Philips company preferred over Varèse. Le Corbusier gave minimal input into the details of how the interior of the pavilion would work, offering only a vague concept of what the experience should accomplish. The basic guidelines given to Xenakis and Varèse were that the interior should be shaped rather like the stomach of a cow, with the form derived from a basic mathematical algorithm. Audience members would enter in groups of 500 at ten-minute intervals. For two minutes, as the audience filed in through a curved passageway, they would hear Xenakis’s transitional piece before entering a room that fell into darkness, enveloping them in light and sound for eight minutes while an accompanying video projected images along the walls of the pavilion. At the end of the eight-minute piece, the spectators would exit, digested, through another passage while the next group filed in.