In the large image, particles in a pile of graphite powder erupt due to illumination with a red laser. The laser heats particles just below the surface the most, causing surface particles to jump up due to photophoresis and the solid state greenhouse effect. The inset shows an eruption of vitreous carbon. The images are long exposures, and the laser was slowly moved to excite different locations. Photo credit: Gerhard Wurm and Oliver Krauss.

Citation: Scientists pin down causes of dust eruptions (2006, April 18), retrieved 18 August 2019 from https://phys.org/news/2006-04-scientists-pin-eruptions.html

When the physicists turned the laser off, they observed an intriguing effect. The point where the temperature gradient changes sign (the point of highest temperature) moves deeper into the dust bed. In the tenths of a second after the laser is turned off, the photophoretic force increases further below the surface, causing larger aggregates to be ejected from the upper part of the bed.

“When you turn off the laser, the normal cooling begins,” explained Wurm. “And since the temperature gradient at the surface is largest in absolute terms, heat flows in this direction better, which is why the maximum has to shift further into the sample.”

Because photophoresis works best in low-pressure environments (10 mbar was used in this experiment), it would be rare to observe the force naturally acting on dust particles near the surface of the Earth. However, in the early days of Earth – as well as of other planets and stars – photophoretic ejection at sub-mbar pressures likely played a role in the growth of gas-dust disks, which in turn triggered the formation of asteroids and Kuiper belt objects.

For future applications, the physicists theorize that Mars’ low surface pressure makes the planet a candidate to host the photophoretic force. For example, with the equipment used on Mars exploration missions, photophoretic technology could aid in the removal of dust from solar panels and lenses.
Further, the scientists consider creating a solar sail that would be powered by the photophoretic force instead of radiation pressure.

“You could construct a fabric which would look, for example, like a fisher-net with micron or sub-micron-sized fibers,” explained Wurm. “The individual fibers would have ‘negative photophoresis,’ which occurs when particles are pulled by the light after being ejected, and the whole net should be lifted by light. With negative photophoresis, I’d guess a sail might carry a few times its own weight just by ‘passive’ sunlight… Say a 10 meter by 10 meter sail might carry a few tens of kilograms.”

Wurm and Krauss also speculate on the possibility of fabricating an artificial surface that would optimize photophoretic forces on Earth, as well as on industrial applications. Because all these possibilities are based on studies of “dirt,” these experiments take advantage of something often considered an everyday nuisance.

“With modern physics, it is hard to come by the effects we observed here because everyone is proud of working in a clean environment at ‘perfect’ vacuum,” said Wurm. “This is fantastic, but you never see photophoretic effects there. You need the gas, the ‘bad’ vacuum, and you need the dirty surfaces.

“With respect to planet formation, dust really holds the clues to our origins. The word ‘dust’ implies rather negative feelings because it is related to dirt in everyday life. Dust is everywhere. We will never love it and we can’t leave it. You could call it micro- or even nanoscience and it might sound a little better and fancier for research – but we’re still talking about dust, whatever name tag you put on it.”

Citation: Wurm, Gerhard and Krauss, Oliver. Dust Eruptions by Photophoresis and Solid State Greenhouse Effects. Physical Review Letters 96, 134301 (2006).

By Lisa Zyga, Copyright 2006 PhysOrg.com

By simple light and heat mechanisms, dust particles seem to defy gravity and leap up into the air.
The effect, which once played a role in the formation of the Earth and asteroids, could also have applications in dust removal and could even propel small probes on Mars. When a red laser beam shines on a pile of dust, some dust particles jump up, apparently erupting in a fountain of dust strands (see image). In studying the mechanisms behind the erupting dust, scientists Gerhard Wurm and Oliver Krauss from the University of Münster found two causes working together that explain their observations: photophoresis and the solid state greenhouse effect.

Photophoresis – the movement of particles due to light – is based on a long-known effect called thermophoresis – the movement of particles due to heat transfer. Essentially, in environments with temperature gradients, particles migrate from hotter to cooler regions due to the thermophoretic force. When light absorption serves as the heat source, the mechanism is called the photophoretic force.

In addition to the presence of a temperature gradient at the surface, Wurm and Krauss found that the solid state greenhouse effect also plays a role in dust eruptions. This greenhouse effect occurs because the laser beam heats up dust particles slightly below the surface the most (at a depth of at least 100 micrometers, which encompasses several tens of particle layers). In a recent Physical Review Letters paper, the scientists describe how coupling photophoresis with this greenhouse effect means that surface dust particles will strive to migrate away from hot underlying particles – and that direction is up. The team found that the pull-off force for a spherical micron-size particle is around 10⁻⁷ N. On average, about a million particles are needed to overcome cohesion.

“We observed particles jump up to 5 cm,” Wurm told PhysOrg.com. “You should get them to 10 cm, but this might not be the limit.
The limit probably depends strongly on the dust powder, its size distribution, cohesion and the light source.”

With 50 mW of laser power, radiation can penetrate a dust bed to a depth of up to a few millimeters. While the temperature generally decreases deeper into the dust bed, it actually peaks not at the surface, but around a depth of 100 micrometers. This reversed temperature gradient near the surface causes aggregates of dust grains to be ejected.

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
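The numbers quoted above invite a rough scale check. The following sketch compares the article's ~10⁻⁷ N pull-off force with the weight of a single micron-size grain, and infers the launch speed implied by the observed ~5 cm jumps. The graphite density and the assumption of drag-free ballistic flight are my own, not from the article.

```python
import math

# Rough scale check (assumptions labeled): compare the ~1e-7 N
# photophoretic pull-off force quoted in the article with the weight
# of a single micron-size graphite grain, and infer the launch speed
# needed for the observed ~5 cm jumps (ballistic, no drag assumed).

G = 9.81                      # m/s^2, Earth surface gravity
RHO_GRAPHITE = 2200.0         # kg/m^3, assumed bulk density of graphite
radius = 0.5e-6               # m, "micron-size" -> 1 um diameter

volume = 4.0 / 3.0 * math.pi * radius**3
weight = RHO_GRAPHITE * volume * G          # N, weight of one grain

pull_off = 1e-7                             # N, value from the article
ratio = pull_off / weight                   # grain-weights the force supplies

jump_height = 0.05                          # m, jumps of up to 5 cm observed
launch_speed = math.sqrt(2 * G * jump_height)   # v = sqrt(2 g h)

print(f"single-grain weight: {weight:.2e} N")
print(f"pull-off / weight:   {ratio:.1e}")
print(f"launch speed for a 5 cm jump: {launch_speed:.2f} m/s")
```

The force exceeds a single grain's weight by many orders of magnitude, which is consistent with the article's remark that it takes on the order of a million particles' worth of cohesion to hold an aggregate down.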
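Wurm's solar-sail speculation can also be put in perspective with a back-of-envelope comparison against ordinary radiation pressure. The solar constant, sail absorptivity, and Earth-gravity baseline below are my own illustrative assumptions; only the 10 m × 10 m size and the "few tens of kilograms" figure come from the article.

```python
# Back-of-envelope: how much a plain radiation-pressure sail of the
# size Wurm mentions could support, versus his speculative
# photophoretic figure. Assumptions (not from the article): solar
# flux at 1 AU, a perfectly absorbing sail, Earth surface gravity.

SOLAR_CONSTANT = 1361.0   # W/m^2, solar flux at 1 AU
C = 2.998e8               # m/s, speed of light
G = 9.81                  # m/s^2

area = 10.0 * 10.0        # m^2, the 10 m x 10 m sail from the article

# Radiation-pressure force on a perfectly absorbing sail: F = I * A / c
f_rad = SOLAR_CONSTANT * area / C          # ~4.5e-4 N
liftable_mass_rad = f_rad / G              # kg this force could support

claimed_mass_photo = 30.0                  # kg, "a few tens of kilograms"

print(f"radiation-pressure force:  {f_rad:.2e} N")
print(f"mass it could support:     {liftable_mass_rad * 1e3:.2f} g")
print(f"implied photophoretic gain: ~{claimed_mass_photo / liftable_mass_rad:.0f}x")
```

Radiation pressure on 100 m² of sunlight supports well under a gram, so the quoted "few tens of kilograms" would require photophoresis to outperform radiation pressure by roughly six orders of magnitude, which is why the gas-mediated mechanism, not light momentum, does the work.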
All individuals, whether they have religious or secular upbringings, have a chance of defecting. Rowthorn explained that the rates of defection from religious to secular and from secular to religious preferences depend on time and place.

“Amongst Christian Churches in Europe and North America, defection rates are higher than conversion rates,” he said. “In some cases, such as the Amish, these losses are greatly outweighed by their very high fertility. However, for mainstream Churches, such as the Catholics or Anglicans, the birth rate is not high enough on its own to offset defections and they rely on immigration to maintain their numbers. In certain other parts of the world, such as East Asia, mainstream Christian Churches are growing through conversion.”

Rowthorn’s model shows that, even when the religious defection rate is high, the overall high fertility rate of religious people will cause the religiosity allele to eventually predominate in the global society. The model shows that the wide gap in fertility rates could have a significant genetic effect in just a few generations. The model predicts that the religious fraction of the population will eventually stabilize at less than 100%, so there will remain a possibly large percentage of secular individuals. But nearly all of the secular population will still carry the religiosity allele, since high defection rates will spread the allele to secular society when defectors have children with a secular partner. Overall, nearly all of the population will have a genetic predisposition toward religion, although some or many of these individuals will lead secular lives, Rowthorn concluded.

“The rate at which religious people abandon their faith affects the eventual share of the population who are religious,” Rowthorn said. “However, it does not alter the conclusion of the article that the religiosity allele will eventually take over.
If the defection rate is high, there will be lots of children who are brought up as religious and carry the religiosity allele, but who give up their faith. Such people will carry the religiosity allele into the secular population with them. Many of their descendants will also carry this allele and be secular. In this case, the high-fertility group is constantly sending migrants into the low-fertility secular population. Such migrations will simultaneously boost the size of the secular population and transform its genetic composition.”

Rowthorn acknowledges that he can only speculate on how a genetic predisposition toward religion may manifest itself in a secular context. Previous research has suggested that a genetic predisposition toward religion is tied to a variety of characteristics such as conservatism, obedience to authority, and the inclination to follow rituals. In this instance of evolution, it’s possible that these characteristics may become widespread not for their own fitness but by hitching a ride with a high-fitness cultural practice.

Rowthorn has developed a model showing that the genetic components that predispose a person toward religion are currently “hitchhiking” on the back of the religious cultural practice of high fertility rates. Even if some of the people who are born to religious parents defect from religion and become secular, the religious genes they carry (which encompass other personality traits, such as obedience and conservatism) will still spread throughout society, according to the model’s numerical simulations.

“Provided the fertility of religious people remains on average higher than that of secular people, the genes that predispose people towards religion will spread,” Rowthorn told PhysOrg.com. “The bigger the fertility differential between religious and secular people, the faster this genetic transformation will occur. This does not mean that everyone will become religious.
Genes are not destiny. Many people who are genetically predisposed towards religion may in fact lead secular lives because of the cultural influences they have been exposed to.”

The model’s assumptions are based on data from previous research. Studies have shown that, even controlling for income and education, people who are more religious have more children, on average, than people who are secular (defined here as being religiously indifferent). According to the World Values Survey for 82 countries, adults attending religious services more than once per week averaged 2.5 children, those attending once per month averaged 2.01 children, and those never attending averaged 1.67 children. The more orthodox the religious sect, the higher the fertility rate, with sects such as the Amish, the Hutterites, and the Haredim having up to four times as many children as the secular average. Studies have found that the high fertility rates stem from cultural and social influences by religious organizations rather than from biological factors.

But while fertility is determined by culture, an individual’s predisposition toward religion is likely to be influenced by genetics, in addition to their upbringing. In the model, Rowthorn uses a “religiosity gene” to represent the various genetic factors that combine to genetically predispose a person toward religion, whether by remaining religious from youth or by converting to religion from a secular upbringing. On the flip side, the nonreligiosity allele of this “gene” makes a person more likely to remain or become secular. If both parents have the religiosity allele, their children are more likely to have the religiosity allele than if one or both parents did not have it. However, children born to religious parents may have the nonreligiosity allele, while children born to secular parents may have the religiosity allele.
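The hitchhiking dynamic described above can be illustrated with a deliberately simplified toy simulation. This is not Rowthorn's actual dual-inheritance model: it assumes a single perfectly inherited "haploid" allele, fertility set by religious practice alone (using the World Values Survey averages quoted in the article), and assumed defection and conversion probabilities that the allele merely biases.

```python
# Toy single-locus sketch of the hitchhiking argument -- NOT Rowthorn's
# actual model. Assumptions: one perfectly inherited allele, fertility
# determined by practice alone, and the allele only biasing whether a
# religiously raised child keeps the faith. All rate values except the
# fertility averages are illustrative.

def step(pop, f_rel=2.5, f_sec=1.67, d_allele=0.3, d_no_allele=0.7, conv=0.05):
    """One generation. pop maps (practice, allele) -> population share.
    practice: 'R' (religious) or 'S' (secular); allele: carries the
    religiosity allele or not. d_*: defection probability for religiously
    raised children by genotype; conv: conversion probability for
    secularly raised children."""
    new = {('R', True): 0.0, ('R', False): 0.0,
           ('S', True): 0.0, ('S', False): 0.0}
    for (practice, allele), share in pop.items():
        kids = share * (f_rel if practice == 'R' else f_sec)
        if practice == 'R':
            d = d_allele if allele else d_no_allele
            new[('R', allele)] += kids * (1 - d)   # stay religious
            new[('S', allele)] += kids * d         # defect, keep allele
        else:
            new[('R', allele)] += kids * conv      # convert
            new[('S', allele)] += kids * (1 - conv)
    total = sum(new.values())
    return {k: v / total for k, v in new.items()}  # renormalize to shares

# Start: 20% religious, allele present only in the religious group.
pop = {('R', True): 0.2, ('R', False): 0.0,
       ('S', True): 0.0, ('S', False): 0.8}
for _ in range(50):
    pop = step(pop)

allele_freq = pop[('R', True)] + pop[('S', True)]
religious = pop[('R', True)] + pop[('R', False)]
print(f"allele frequency after 50 generations: {allele_freq:.2f}")
print(f"religious fraction:                    {religious:.2f}")
```

Even with a high defection rate, the allele frequency climbs toward fixation while the religious *practice* fraction stabilizes well below 100%, mirroring the article's conclusion that most secular people would eventually carry the allele.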
Having the religiosity allele does not make a person religious, but it makes a person more likely to have characteristics that make them religiously inclined; the converse is also true.

Citation: Model predicts ‘religiosity gene’ will dominate society (2011, January 28), retrieved 18 August 2019 from https://phys.org/news/2011-01-religiosity-gene-dominate-society.html

Copyright 2010 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.

More information: Robert Rowthorn. “Religion, fertility and genes: a dual inheritance model.” Proceedings of the Royal Society B. DOI: 10.1098/rspb.2010.2504

A variety of religious symbols. A new study has investigated how the differing fertility rates of religious and secular individuals might affect the genetic evolution of society overall. Image credit: Wikimedia Commons.

(PhysOrg.com) — In the past 20 years, the Amish population in the US has doubled, increasing from 123,000 in 1991 to 249,000 in 2010. The huge growth stems almost entirely from the religious culture’s high fertility rate, which is about 6 children per woman, on average. At this rate, the Amish population would reach 7 million by 2100 and 44 million by 2150. On the other hand, the growth may not continue if future generations of Amish choose to defect from the religion and if secular influences reduce the birth rate. In a new study, Robert Rowthorn, emeritus professor of economics at Cambridge University, has looked at the broader picture underlying this particular example: how will the high fertility rates of religious people throughout the world affect the future of human genetic evolution, and therefore the biological makeup of society?
Most of the websites out there seem stumped as to how these guys pulled off this little trick. Fortunately, the researchers explain it in detail. And even more fortunately, it’s not that hard to understand.

First, a thin sapphire wafer is created. It is then coated with a very thin ceramic layer of yttrium barium copper oxide, which becomes a superconductor (a material that conducts electricity with no loss of energy) at very cold temperatures. The result is a frozen disc. When it is placed over a magnet, the superconducting material and the magnet repel one another due to the Meissner effect (the expulsion of the magnetic field from a material when it goes into a superconducting state). But because the layer of superconducting material is so thin, some of the magnetic field is allowed through at certain particularly weak points. These paths through are called flux tubes, and they are the real secret to the whole trick. Because there are many of them, they cause a three-dimensional holding or locking effect, which is what viewers see when watching the video.

Upon viewing the video, a lot of commentators refer to the scene in Back to the Future Part II in which Marty McFly rides a hoverboard for a few minutes. Unfortunately, the science demonstrated in this latest video holds no hope for that; not unless someone figures out how to keep such boards frozen indefinitely (and embeds magnets everywhere) or, better yet, figures out a way to make a superconductor that works at room temperature.

This is nothing new of course; everyone’s seen it in science class. What is new is that when the demonstrator turns the disc, it stays hovering at that angle. This is in contrast to the wobbling we’re used to in such demonstrations. Next, the disc is set over a different surface where it is made to spin. But that’s only the beginning. The disc is then set on a track where it zips around in midair. And again, it can be made to do so at whatever angle is desired.
Then, the track is turned upside down and the disc hovers below it, again zipping around.

© 2011 PhysOrg.com

(PhysOrg.com) — A video created by researchers at Tel Aviv University in Israel has the Internet buzzing. Though rather simple, it just looks really cool, hence all the attention. It’s a demonstration of quantum locking, though to non-science buffs, it looks more like science fiction come to life. In the video, a disc – obviously frozen, given the vapor rising from its surface – hovers over a surface.

Citation: Quantum levitating (locking) video goes viral (2011, October 19), retrieved 18 August 2019 from https://phys.org/news/2011-10-quantum-levitating-video-viral.html
Copyright 2013 Phys.org. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of Phys.org.

Journal information: Journal of Micromechanics and Microengineering

(Phys.org) — For many electronic devices, colder is better. At low temperatures, electronic devices such as sensors and detectors operate with a higher efficiency and better overall performance than they do at room temperature. And superconducting devices, known for their zero electrical resistance, require extremely cold temperatures to operate. But in order to make cryogenic electronics more widespread, micro-sized cryogenic coolers need to become cheaper and more reliable. Addressing this challenge, scientists have designed and fabricated a micro-sized cryocooler that cools devices down to 30 K (−243 °C, −406 °F) in about an hour, and has a simple design that lends itself to high-yield fabrication.

The researchers – Haishan Cao of the University of Twente in Enschede, The Netherlands, and coauthors from Kryoz Technologies and Micronit Microfluidics, both in Enschede, as well as from the University of Twente – have published a paper on the new micro-sized cryogenic cooler in a recent issue of the Journal of Micromechanics and Microengineering.

The new cryocooler is a micro-sized version of a Joule-Thomson (JT) cryocooler, which cools by causing a high-pressure gas to expand as it flows from a high-pressure region to a low-pressure region. As James Joule and William Thomson discovered in 1852, a gas that expands in this way will, under certain conditions, cool down – a finding now known as the Joule-Thomson effect. In the new study, the micro-sized cryocooler uses two stages to cool a device. The first stage is a single-stage cryocooler device that other researchers at the University of Twente previously designed, which cools down to 100 K (−173 °C, −280 °F) using nitrogen as the working gas.
In the second stage, which cools down to 30 K, the researchers used hydrogen as the gas. The reason for using two different gases is that the Joule-Thomson effect only works (i.e., produces a cooling effect) if the expanding gas is already cooled below a certain temperature called its inversion temperature. This critical temperature is different for different gases. If the gas is above this temperature, then expansion will cause it to warm up rather than cool down. Nitrogen has a higher inversion temperature than hydrogen, which is why the researchers used nitrogen in the pre-cooling stage, and then cooled the hydrogen through heat exchange with the nitrogen until the hydrogen dropped below its inversion temperature of 205 K (−68 °C, −91 °F), so that it could be cooled by the second Joule-Thomson stage.

“30 K is sufficiently cold to cool most electronic devices such as infrared detectors, low-noise amplifiers and high-temperature superconducting devices,” Cao told Phys.org. “To cool superconducting devices based on Nb3Sn or NbTi, an even lower temperature is required. To reach an even lower temperature using the Joule-Thomson effect, a helium stage is needed, in which the hydrogen stage works as a precooler for the helium stage.”

(Bottom left) Photograph of the microcooler mounted into a vacuum flange and surrounded by a PCB, with gas connections. (Top right) The microcooler shown next to a euro coin for size comparison. Credit: H. S. Cao, et al. © 2013 IOP Publishing Ltd

More information: H. S. Cao, et al. “Micromachined cryogenic cooler for cooling electronic devices down to 30 K.” J. Micromech. Microeng. 23 (2013) 025014 (6pp). DOI: 10.1088/0960-1317/23/2/025014

To demonstrate the feasibility of the new two-stage cryocooler, the researchers attached a YBCO film to the device, to be cooled to its superconducting state.
Starting from room temperature, the scientists showed that the nitrogen stage could cool the film to 94 K in about 20 minutes, and the hydrogen stage could cool it further to 30 K in an additional 40 minutes. During this cool-down process, the film reached its superconducting state, demonstrating the possibility of integrating the cryocooler with electronic devices.

One of the biggest advantages of the micro-sized cryocooler is that it has the potential to be fabricated at low cost on a large scale. The cooler consists of just three glass wafers, ranging in thickness from 145 to 400 μm, which are etched, stacked and bonded together. A stack of wafers can also be cut into multiple microcoolers using a dicing and powder blasting process. Another benefit is that the cryocooler operates at modest pressures, ranging from 0.1 MPa (atmospheric pressure) in the low-pressure region to 8.0–8.5 MPa in the high-pressure regions. When the pressure is higher, fabrication becomes more complex, in particular the bonding process.

“A 30 K micro cryocooler is the coldest micro cryocooler that has been published in academic journals,” Cao said. “W. Little previously presented a seven-wafer-stack two-stage JT micro cryocooler operating at 14 MPa with a cold-end temperature of about 30 K. Our two-stage 30 K JT micro cryocooler is operated at modest pressures and realized in a stack of only three wafers. Compared to three-wafer-stack micro cryocoolers, seven-wafer-stack micro cryocoolers are much more difficult to fabricate with acceptable yields. Furthermore, high gas pressures add more stringent requirements to the bonding process and severely add complexity to the development of a compressor for closed-cycle operation of the cryocooler.”

The researchers hope that the relatively simple device requirements and the ability to cool to cryogenic temperatures (defined as temperatures below about 120 K [−153 °C, −244 °F]) will expedite cryogenic electronics applications.
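The staging rule described above (each gas must be precooled below its inversion temperature before its own Joule-Thomson expansion can cool) can be sketched as a simple consistency check. Only the 205 K hydrogen figure comes from the article; the nitrogen and helium inversion temperatures are approximate literature values I am assuming here.

```python
# Sketch of the Joule-Thomson staging rule from the article: expansion
# only cools a gas that enters below its (maximum) inversion temperature,
# so each stage must precool the next stage's gas. Inversion temperatures
# for N2 and He are approximate literature values (assumptions); the
# article itself quotes only the 205 K figure for hydrogen.

INVERSION_T = {          # K, approximate maximum inversion temperatures
    "nitrogen": 621.0,
    "hydrogen": 205.0,   # value quoted in the article
    "helium":   45.0,
}

def stage_ok(gas, inlet_temp):
    """A JT stage produces cooling only if its gas enters below T_inv."""
    return inlet_temp < INVERSION_T[gas]

# Two-stage cooler from the paper: N2 starting at room temperature,
# then H2 precooled by the N2 stage to ~100 K before its own expansion.
assert stage_ok("nitrogen", 295.0)        # N2 JT-cools from room temp
assert not stage_ok("hydrogen", 295.0)    # H2 would *warm* from room temp
assert stage_ok("hydrogen", 100.0)        # ...but cools once precooled

# The proposed third stage: helium precooled by the 30 K hydrogen stage.
assert not stage_ok("helium", 100.0)      # nitrogen alone is not cold enough
assert stage_ok("helium", 30.0)           # hydrogen stage makes it viable
print("stage ordering consistent")
```

The same check explains the proposed three-stage chain mentioned later: nitrogen precools hydrogen, and hydrogen in turn precools helium below its much lower inversion temperature.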
Such electronic and superconducting sensors and detectors could be especially useful in medical and space applications.

“In medicine, cryogenic electronic devices can be used in some diagnostic techniques, for example, tracing the spread of breast cancer cells,” Cao said. “To determine whether the cancer has spread, the traditional way is to introduce radioactive particles into the body. But the radioactivity represents a health risk for both patient and medical personnel. Another method is using magnetic particles. However, existing detectors are not sensitive enough to measure the signal of the particles. The signal-to-noise ratio of the detectors could be greatly improved by cooling. Another application could be micro cryosurgery with miniaturized cold tips (for removing cancerous tissue).”

Applications of cryogenic electronic devices in space include scientific instrumentation, telecommunications, Earth observation, and meteorology satellites. Many science missions in space use optical detectors that operate at cryogenic temperatures to increase their sensitivity. Radio-frequency devices (filters, delay lines, resonators and antennas) based on high-temperature superconductors have high potential to improve the energy efficiency of telecommunications systems or to reduce their size and weight. Earth observation missions require cryogenics because they use medium-infrared detectors (typically operating around or just below 100 K) to image the Earth’s surface.

“For these applications, the cooler should be small, low cost, low interference and have a very long lifetime.
Joule-Thomson micro cryocoolers are excellent for this, because they have no cold moving parts and can therefore be scaled down to match the size and the power consumption of these devices.”

In the future, the researchers plan to experiment with gas mixtures in order to achieve lower temperatures at lower pressures.

“One of the future goals is to combine the micro cryocooler with a sorption compressor to realize a total micro system,” Cao said. “Other future research topics in this area include three-stage micro cryocoolers, micro cryocoolers using double expansion cycles and the use of a gas mixture as the working fluid. To reach temperatures near the boiling point of helium gas, it is necessary to use three stages. A nitrogen stage is used to precool a hydrogen stage, which in turn works as a precooler for a helium stage. Using a double expansion cycle, a Joule-Thomson cooler can achieve a lower temperature due to the reduction of the pressure drop in the low-pressure line. Compared to a pure gas, mixed gases provide equivalent cooling power with a significantly lower pressure ratio.”

Citation: Electronics like it cold, and 30 K cryocooler delivers (2013, January 18), retrieved 18 August 2019 from https://phys.org/news/2013-01-electronics-cold-cryocooler.html
(Phys.org) — A fundamental question in the evolution of animal body plans is: where did the head come from? In animals with a clear axis of right-left symmetry, the bilaterians, the head is where the brain is, at the anterior pole of the body. Little is known about the possible ancestor of bilaterians. Fortunately, their sister group from that same progenitor, the cnidarians, can be studied in parallel today to give some clues. Cnidarians are creatures like jellyfish, hydra, and sea anemones, which possess rudimentary nerve nets but no clear brain. They all have just a single orifice to the external world, which basically does it all. In a recent paper published in PLOS Biology, researchers from the University of Bergen in Norway compared gene expression patterns in the sea anemone (Nematostella vectensis, Nv) with those from a variety of bilaterian animals. They found that the head-forming region of bilaterians is actually derived from the aboral – the opposite-to-oral – side of the ancestral body plan.

The pioneering developmental biologist Lewis Wolpert is often credited with having observed: “It is not birth, marriage, or death, but gastrulation which is truly the most important time in your life.” Almost all animals undergo a similar gastrulation process early in their development. The point where the cells first invaginate during gastrulation, the blastopore, uniquely defines an embryonic axis. After this stage, however, all bets are off – attempts to define phyla according to hardline criteria, like blastopore = anus, are invariably met by counterexamples where it instead becomes the mouth. Gene expression, while not always constrained to single contiguous areas, therefore provides a baggage-free way to assign homology across species.

Wolpert’s concept of positional information in development has been largely vindicated by the discovery of hox gene codes in a wide variety of animals.
While hox genes are the critical regulators of axial patterning, in most bilaterians they are not expressed in the anterior head-forming region. The researchers focused instead on the genes six3 and foxQ2, transcription factors which have been shown to regulate anterior-posterior development. Six3 knockouts in mice, for example, fail to develop a forebrain. In humans, six3 regulates forebrain and eye development.

Sea anemones, like Nematostella, are curious creatures. As larvae they swim about with their aboral pole forward. As adults they plunge this region into the sea floor and permanently anchor themselves in. Their bodies then undergo various changes, but their oral pole remains intact for feeding. By using knockdown and rescue experiments in Nematostella, the researchers were able to show that six3 is required for the development of the aboral region and the expression of further regulatory genes. This suggests that the development of the region distal from the cnidarian mouth parallels that of the bilaterian head.

The researchers also looked at the expression of the forkhead domain protein foxQ2, which functions downstream of six3. Forkhead box genes are an important class of transcription factors which frequently lack the signature homeodomains and zinc-finger regions common to other transcription factors. Instead they have a unique DNA-binding region that has the shape of a winged helix. The forkhead gene foxP2 in humans has recently garnered a lot of media attention for its apparent role in neural development, and in even more esoteric functions like speech development. FoxQ2 is known to be a well-conserved marker for the most anterior tip of a variety of bilaterians including sea urchins, Drosophila, and cephalochordates. The researchers established that before gastrulation in cnidarians, foxQ2a was expressed in the aboral pole and in a small number of cells resembling neurons. Afterwards, the expression of this “ring gene” was excluded from a central spot.
In conclusion, the expression of genes for anemone head development away from the mouth region suggests that head development came first and was a separate event from mouth development. Secondarily, the head and a coalescing brain appear to have merged to become a centralized control center.

© 2013 Phys.org

Journal information: PLoS Biology

Citation: Which came first, the head or the brain? (2013, March 28), retrieved 18 August 2019 from https://phys.org/news/2013-03-brain_1.html

More information: Sinigaglia C, Busengdal H, Leclère L, Technau U, Rentzsch F (2013) The Bilaterian Head Patterning Gene six3/6 Controls Aboral Domain Development in a Cnidarian. PLoS Biol 11(2): e1001488. doi:10.1371/journal.pbio.1001488

Abstract: The origin of the bilaterian head is a fundamental question for the evolution of animal body plans. The head of bilaterians develops at the anterior end of their primary body axis and is the site where the brain is located. Cnidarians, the sister group to bilaterians, lack brain-like structures and it is not clear whether the oral, the aboral, or none of the ends of the cnidarian primary body axis corresponds to the anterior domain of bilaterians. In order to understand the evolutionary origin of head development, we analysed the function of conserved genetic regulators of bilaterian anterior development in the sea anemone Nematostella vectensis. We show that orthologs of the bilaterian anterior developmental genes six3/6, foxQ2, and irx have dynamic expression patterns in the aboral region of Nematostella. Functional analyses reveal that NvSix3/6 acts upstream of NvFoxQ2a as a key regulator of the development of a broad aboral territory in Nematostella.
NvSix3/6 initiates an autoregulatory feedback loop involving positive and negative regulators of FGF signalling, which subsequently results in the downregulation of NvSix3/6 and NvFoxQ2a in a small domain at the aboral pole, from which the apical organ develops. We show that signalling by NvFGFa1 is specifically required for the development of the apical organ, whereas NvSix3/6 has an earlier and broader function in the specification of the aboral territory. Our functional and gene expression data suggest that the head-forming region of bilaterians is derived from the aboral domain of the cnidarian-bilaterian ancestor. Synopsis: www.plosbiology.org/article/in … journal.pbio.1001484
Comparison of unweathered (left) and weathered Ordovician limestone at a roadcut on the State College Bypass, U.S. Route 322. Credit: Wikipedia. Scientists believe that the Earth has experienced many episodes of global glaciation—where the entire planet is covered in ice, resulting in what is loosely termed a "Snowball Earth." To better understand climate change heading into the future, scientists look to the past. In this latest effort, the research team looked at an event known as the Sturtian glaciation—after a billion years with no ice on the planet at all, the Earth was suddenly covered with the stuff for 55 million years. Until now, why this happened has been a mystery. To find out more, the team traveled to the Mackenzie Mountains—a part of the planet that has proven useful for plotting the past due to its glacial history. The researchers collected sedimentary rocks left by glacial movement, along with rock samples found above and below them. The rock samples were all taken back to the lab, where the researchers tested them for osmium and rhenium levels—the latter breaks down to the former over time, offering a way to determine the age of the rocks. Using this technique, the researchers were able to conclude that the Sturtian lasted for approximately 55 million years. The team also tested the rock samples for isotopes of the same two elements, which led to the discovery that carbon dioxide had been sequestered in them.
This led to a theory suggesting that volcanic activity prior to the Sturtian caused so much carbon dioxide to be absorbed from the atmosphere (weathering opens up rock, making it more absorbent) that the planet cooled until it eventually became a Snowball Earth. Moving forward, the question of whether the Sturtian was truly one long event, or actually a time of many glacial advances and retreats, will be studied further to better understand the mechanism behind such global extremes. Journal information: Proceedings of the National Academy of Sciences © 2013 Phys.org More information: Re-Os geochronology and coupled Os-Sr isotope constraints on the Sturtian snowball Earth, PNAS, published online before print December 16, 2013, DOI: 10.1073/pnas.1317266110. Abstract: After nearly a billion years with no evidence for glaciation, ice advanced to equatorial latitudes at least twice between 717 and 635 Mya. Although the initiation mechanism of these Neoproterozoic Snowball Earth events has remained a mystery, the broad synchronicity of the rifting of the supercontinent Rodinia, the emplacement of large igneous provinces at low latitude, and the onset of the Sturtian glaciation has suggested a tectonic forcing. We present unique Re-Os geochronology and high-resolution Os and Sr isotope profiles bracketing Sturtian-age glacial deposits of the Rapitan Group in northwest Canada. Coupled with existing U-Pb dates, the postglacial Re-Os date of 662.4 ± 3.9 Mya represents direct geochronological constraints for both the onset and demise of a Cryogenian glaciation from the same continental margin and suggests a 55-My duration of the Sturtian glacial epoch.
The Os and Sr isotope data allow us to assess the relative weathering input of old radiogenic crust and more juvenile, mantle-derived substrate. The preglacial isotopic signals are consistent with an enhanced contribution of juvenile material to the oceans and glacial initiation through enhanced global weatherability. In contrast, postglacial strata feature radiogenic Os and Sr isotope compositions indicative of extensive glacial scouring of the continents and intense silicate weathering in a post–Snowball Earth hothouse. (Phys.org)—A team of researchers conducting a field study in the Mackenzie Mountains in northwest Canada suggests that rock weathering almost a billion years ago may have led to the entire planet being encased in ice for 55 million years. In their paper published in Proceedings of the National Academy of Sciences, the multinational team describes the field study, their subsequent analysis of the rock samples they retrieved, and how that analysis led to what they believe is an explanation of one of the most dramatic instances of climate change on record. Citation: Rock weathering may have led to 'Snowball Earth' (2013, December 17) retrieved 18 August 2019 from https://phys.org/news/2013-12-weathering-snowball-earth.html
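The Re-Os dating described above rests on standard radiometric-age arithmetic: ¹⁸⁷Re beta-decays to ¹⁸⁷Os at a known rate, so a measured daughter/parent ratio yields an age. The sketch below is a hypothetical illustration (not the authors' actual calculation), using the accepted ¹⁸⁷Re decay constant of about 1.666 × 10⁻¹¹ per year and an assumed isotope ratio:

```python
import math

# Hypothetical illustration (not the authors' code) of the Re-Os age equation:
# 187Re beta-decays to 187Os, so for decay constant lam the age is
#   t = ln(1 + daughter/parent) / lam
LAMBDA_RE187 = 1.666e-11  # decay constant of 187Re, per year

def re_os_age(os187_radiogenic, re187):
    """Return an age in years from radiogenic 187Os and present-day 187Re."""
    return math.log(1.0 + os187_radiogenic / re187) / LAMBDA_RE187

# An assumed daughter/parent ratio of 0.01110 gives roughly 660 Myr,
# comparable in magnitude to the paper's postglacial 662.4 +/- 3.9 Mya date.
print(f"{re_os_age(0.01110, 1.0) / 1e6:.1f} Myr")
```

Real Re-Os geochronology uses isochrons across multiple samples to separate radiogenic from initial osmium; the single-ratio formula here shows only the underlying arithmetic.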
Scientists have been investigating the possibility of oceans on Mars for several years, but have so far been unable to prove they existed. Other researchers have found evidence of tsunamis on Mars, but have been unable to find an associated oceanic impact crater to go along with it. In this new effort, the researchers believe they have found both. Prior research uncovered what has been described as thumbprint-looking terrain on the surface of Mars, which some researchers have ascribed to mud moving downhill from volcanoes or being pushed by glaciers. But these formations might have been created by a very large tsunami, the researchers suggest, and they have found a crater that they believe might have been its cause. Lomonosov crater, situated in the northern plains, could very well be the scar left behind by an asteroid striking a northern ocean, generating waves hundreds of feet high that eventually spilled onto land and left enormous deposits behind. If such an asteroid did strike the ocean, the team continues, after diving through the water it would have created a crater on the ocean floor. That crater would have been a void suddenly filled by water rushing in from all sides and smashing together, creating a secondary tsunami following behind the first. As the first tsunami was receding over land, the second tsunami would have struck, and it was these two acting together that the researchers believe produced the characteristic thumbprint ridges.
They have used numerical modeling of wave propagation to back up their claims. The researchers contend that no other reasonable explanation exists for the creation of the ridges, which, they suggest, offers a degree of evidence not just of a tsunami but of an ocean on Mars. Journal information: Journal of Geophysical Research © 2017 Phys.org More information: Francois Costard et al. Modeling tsunami propagation and the emplacement of thumbprint terrain in an early Mars ocean, Journal of Geophysical Research: Planets (2017). DOI: 10.1002/2016JE005230. Abstract: The identification of lobate debris deposits in Arabia Terra, along the proposed paleoshoreline of a former northern ocean, has renewed questions about the existence and stability of an ocean-sized body of water in the early geologic history of Mars. The potential occurrence of impact-generated tsunamis in a northern ocean was investigated by comparing the geomorphologic characteristics of the Martian deposits with the predictions of well-validated terrestrial models (scaled to Mars) of tsunami wave height, propagation direction, runup elevation, and distance for three potential sea levels. Our modeling suggests several potential impact craters ~30–50 km in diameter as the source of the tsunami events. Within the complex topography of flat-floored valleys and plateaus along the dichotomy boundary, the interference of the multiple reflected and refracted waves observed in the simulation may explain the origin of the arcuate pattern that characterizes the thumbprint terrain. Credit: NASA Citation: Evidence of giant tsunami on Mars suggests an early ocean (2017, March 27) retrieved 18 August 2019 from https://phys.org/news/2017-03-evidence-giant-tsunami-mars-early.html (Phys.org)—A team of researchers with members from France, Italy and the U.S.
has found what they believe is evidence of a giant tsunami occurring on Mars approximately 3 billion years ago due to an asteroid plunging into an ocean. In their paper published in the Journal of Geophysical Research, the group outlines the evidence and why they believe a tsunami is the most likely factor that led to the creation of some unique planetary formations.
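To give a rough sense of the physics being modeled, a tsunami in the long-wavelength (shallow-water) limit travels at c = √(gh), where h is the ocean depth. The toy sketch below is my own illustration, not the team's numerical model; Mars' surface gravity of about 3.71 m/s² is a known value, but the depths are assumed:

```python
import math

# Toy shallow-water estimate (illustrative only, not the authors' model):
# wave speed c = sqrt(g * h) for ocean depth h, using Mars surface gravity.
G_MARS = 3.71  # m/s^2, Mars surface gravity

def tsunami_speed(depth_m, g=G_MARS):
    """Shallow-water tsunami speed in m/s for a given ocean depth in metres."""
    return math.sqrt(g * depth_m)

# Assumed, illustrative depths: deeper water carries the wave faster.
for depth in (100, 500, 1000):
    print(f"depth {depth:4d} m -> {tsunami_speed(depth):5.1f} m/s")
```

Because Mars' gravity is under half of Earth's, a wave in a given depth of water moves more slowly there, one of the scalings the "terrestrial models (scaled to Mars)" in the abstract must account for.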
© 2018 Phys.org Illustration of electrogates. Insets show a close-up of the area surrounding the trench. Credit: IBM Research-Zurich Journal information: Applied Physics Letters Each electrogate consists of a trench etched into the bottom surface of the microchannel, with one electrode patterned over the trench and a second electrode patterned a short distance in front of it. When a liquid sample flows along the microchannel in the absence of a voltage, it stops at the trench because the abrupt change in the contact angle creates a pinning force on the liquid. A small voltage (<10 volts) applied between the two electrodes pulls ions from the liquid down to the edge of the trench where the liquid is pinned, making this area more wettable. As a consequence, the contact angle of the liquid in this area decreases, causing the liquid to resume flowing across the trench and through the microchannel. The researchers demonstrated that the curvature of the trench determines the reliability and retention time of the electrogates. With a large curvature, they achieved 100% reliability, start and stop times of less than a second, and retention times exceeding 5 minutes, which can be extended beyond 45 minutes with additional strategies. The electrogates also work with various types of liquids, including human serum. Among their advantages, the electrogates are easy to fabricate, have long-term stability, are biocompatible, and can be implemented in multiple locations on the same chip. The researchers expect that the electrogates can be easily incorporated into low-power, portable microfluidic devices in the future. "We are supported by a grant from the EU, and we still have a little bit of time to 'push' electrogates further," Delamarche said. "One task (nearly complete) is to vary the options for fabricating electrogates so that technologists have more freedom to design and fabricate them.
This can help spread the concept, we think. Then, we will show specific examples where combining a few electrogates can create more advanced functions for microfluidic systems." Although microfluidic devices have a wide variety of uses, from point-of-care diagnostics to environmental analysis, one major limitation is that they cannot be modified for different uses on the fly, since their flow paths are set during fabrication. In a new study, researchers have addressed this limitation by designing electrogates that can regulate the flow of liquid at different points along the microchannel—a process that can be entirely controlled with a smartphone. The researchers, Y. Arango, Y. Temiz, O. Gӧkçe, and E. Delamarche, at IBM Research-Zurich in Rüschlikon, Switzerland, have published a paper on electrogates in a recent issue of Applied Physics Letters. "Point-of-care diagnostics represent a very segmented market," Delamarche told Phys.org. "For each type of test, a microfluidic device needs to be designed and fabricated to ensure optimal assay performance (volume of sample passing through the device, flow rates, time given for the reactions to take place, time given for dissolving some reagents in the chip with the sample, etc.). This is a bit frustrating, and with silicon microtechnology, it is always beneficial to cover as many applications as possible without too much redesign or change to the manufacturing processes. "This is where electrogates help, and this is what motivated us to invent them. The idea is to make chips much more generic and transfer some of the routing and timing of the flow to a software level, i.e., a protocol uploaded on a smartphone or tablet. Changing protocols on a software level is easy, fast, flexible and convenient." Rather than using mechanical elements such as pumps and valves to control the flow, the electrogates are based on electrowetting.
This process involves applying an electric voltage to control the wetting properties of the surface, which in turn controls the flow of the liquid. The researchers in their lab. Credit: IBM Research-Zurich More information: Y. Arango, Y. Temiz, O. Gӧkçe, and E. Delamarche. "Electrogates for stop-and-go control of liquid flow in microfluidics." Applied Physics Letters. DOI: 10.1063/1.5019469. Citation: Electrogates offer stop-and-go control in microfluidics (2018, April 24) retrieved 18 August 2019 from https://phys.org/news/2018-04-electrogates-stop-and-go-microfluidics.html
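Electrowetting is commonly described by the Young-Lippmann relation, in which an applied voltage lowers the apparent contact angle of a liquid on a dielectric-coated electrode. The sketch below uses illustrative material parameters (assumed values, not IBM's device specifications) to show how a few volts can make a pinned liquid edge measurably more wettable:

```python
import math

# Young-Lippmann relation (illustrative parameters, not IBM's device values):
#   cos(theta_V) = cos(theta_0) + eps_r * eps_0 * V**2 / (2 * gamma * d)
EPS_0 = 8.854e-12   # vacuum permittivity, F/m
GAMMA = 0.072       # surface tension of water, N/m
EPS_R = 3.0         # assumed relative permittivity of the dielectric coating
D = 100e-9          # assumed dielectric thickness, m

def contact_angle(theta0_deg, volts):
    """Apparent contact angle (degrees) at a given applied voltage."""
    cos_theta = (math.cos(math.radians(theta0_deg))
                 + EPS_R * EPS_0 * volts**2 / (2 * GAMMA * D))
    # Clamp: beyond full wetting the formula saturates at 0 degrees.
    return math.degrees(math.acos(min(cos_theta, 1.0)))

# The angle drops as the voltage rises, letting the pinned liquid advance.
for v in (0, 5, 10):
    print(f"{v:2d} V -> contact angle {contact_angle(110, v):5.1f} deg")
```

With these assumed numbers, a voltage of 10 V lowers a 110° contact angle by roughly 10°, consistent in spirit with the article's point that under 10 volts suffices to unpin the liquid at the trench edge.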
An online game that allows people to deploy Twitter bots, photoshop evidence and incite conspiracy theories has proven effective at raising their awareness of "fake news", a study from the University of Cambridge has found. Results from the study of 15,000 users of the "Bad News" game, launched last year by the university's Cambridge Social Decision-Making Lab (CDSMLab), showed it was possible to train the public to be better at spotting propaganda. "Research suggests that fake news spreads faster and deeper than the truth, so combatting disinformation after the fact can be like fighting a losing battle," said Sander van der Linden, the CDSMLab's director. Read the whole story: The New York Times
A stunning goal from Nicolas Anelka helped Mumbai City FC pip Kerala Blasters 1-0 in their Hero Indian Super League (ISL) match at the D.Y. Patil stadium here on Sunday. A foul by Ishfaq Ahmed on Jan Stohanzl earned the hosts a free kick in the 44th minute, and Anelka was quick to capitalise. The former Chelsea star's strike curled above the defensive wall to the left post, with a helpless goalkeeper, Sandip Nandy, diving in a desperate attempt. Anelka, who had missed the first three matches due to suspension, showed his class and earned the much-needed three points for his side. The hosts looked like a new side in the second half, rejuvenated by their one-goal lead. They attacked well, seemed eager to add a few more goals to their tally, and came close to doing so a few times. Mumbai had the chance to double the lead in the 58th minute, but Stohanzl skied his shot and frittered away the golden opportunity. Indian forward Subash Singh was also guilty of missing a sitter in the 74th minute. Anelka did everything right to beat three defenders and feed the ball to Singh, who shot over the crossbar. Anelka seemed desperate to make up for the lost matches, and the French striker came close to adding to his tally by going solo in the 86th minute, but the goalkeeper blocked it. Moritz found the ball on the rebound but couldn't slot it home. The 35-year-old had another opportunity a minute later, but Moritz squandered the chance inside the box and shot wide. The hosts looked to increase their lead even in the dying moments of the game, when Nadong Bhutia hit the top bar a few seconds before the close of the match.