This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

(PhysOrg.com) — The expanding spot discovered on Venus last month may not have garnered as much attention as the meteor impact on Jupiter, but its cause is certainly more puzzling. While astronomers are fairly sure that the new spot seen in Jupiter's atmosphere was caused by an impact, there is evidence that the same is not true of the spot on Venus.

New Scientist reports on why astronomers don't think the spot on Venus was caused by a meteor: "The spot is bright at ultraviolet wavelengths, which may argue against a meteoroid impact as a cause. That's because rocky bodies, with the exception of objects very rich in water ice, should cause an impact site to darken at ultraviolet wavelengths as it fills with debris that absorbs such light, says Sanjay Limaye of the University of Wisconsin-Madison and a member of the Venus Express team."

Some of the explanations being advanced for the spot in Venus' atmosphere include:

* Volcanic eruption. (This option is considered unlikely, since the thick atmosphere would likely block most volcanic activity from being visible to us.)
* Charged particles from solar interaction with Venus' atmosphere.
* Atmospheric turbulence concentrating bright material in a confined area.

The other interesting point about the Venus bright spot is that it — like the Jupiter "scar" — was first noticed by an amateur astronomer. The fact that astronomy is so accessible to a wide range of people is encouraging for broader interest in the sciences.

© 2009 PhysOrg.com

Astronomers are trying to determine the cause of an expanding spot seen in Venus' atmosphere.
Citation: Expanding Spot on Venus Puzzles Astronomers (2009, August 4) retrieved 18 August 2019 from https://phys.org/news/2009-08-venus-puzzles-astronomers.html
The researchers, from the Korea Electronics Technology Institute and Sungkyunkwan University, both in Gyeonggi-do, Korea; the Ulsan National Institute of Science and Technology in Ulsan, Korea; and Korea University in Seoul, Korea, have published their study in a recent issue of Nano Letters.

"We demonstrated how a self-assembly-mediated process can be applied to fabricate graphene micro-patterns on flexible substrates," Professor Kwang Suh of Korea University told PhysOrg.com. "This process provides a scalable and compatible methodology for the large-scale and roll-to-roll production of graphene patterns."

The goal of the research was to deposit highly ordered patterns of a PMMA polymer solution (PMMA is also known as Plexiglas in its solid form) onto a single-layer graphene film prepared on a flexible substrate. The PMMA protects specific regions of the graphene while the graphene is etched by a plasma treatment. After the PMMA is washed off, graphene patterns remain where the PMMA had been, since the uncovered regions were etched away.

To pattern the PMMA solution onto the graphene, the researchers positioned a roller on top of the graphene; the roller was pushed by an upper motorized plate at a defined speed. When the researchers loaded the PMMA solution into the confined space between the roller and the graphene surface, the edge of the PMMA solution (i.e., the contact line) underwent continuous stick-slip motion due to the competition between pinning and capillary forces. As a result, periodically striped PMMA patterns formed on the graphene surface over large areas. This method produced PMMA stripes with nearly equivalent spacing and a width of about 18 micrometers. By rotating the graphene film 90°, the researchers could also fabricate cross-striped patterns.

"Our approach is not only inexpensive but also has diverse applicability, as it can run on either flexible or rigid substrates; and it is much simpler than the conventional photo-lithography process," said Dr.
Woo Seok Yan of the Korea Electronics Technology Institute.

To investigate the electronic properties of the final graphene patterns, the researchers fabricated flexible graphene-based field-effect transistors based on the patterns. After adding electrodes and an ion-gel gate dielectric, the researchers tested the transistors and found that they exhibit good electron mobility at low voltages. The same technique could be used to fabricate a variety of graphene-based devices.

"With the benefits of its simplicity, high throughput, and scalability to roll-to-roll processing, this process holds promise for the integration of graphene into practical electronic devices such as field-effect transistors and sensors," Yan said.

Yan and Suh added that they plan to extend the technique to smaller scales. "Extension of this self-assembly process may lead to an even greater variety of complex graphene patterns on the nanometer scale," Suh said. "We are now concentrating on the high-throughput and roll-to-roll fabrication of nano-architectured graphene patterns based on this technique."

Citation: Motorized roller could mass-produce graphene-based devices (2012, February 23) retrieved 18 August 2019 from https://phys.org/news/2012-02-motorized-roller-mass-produce-graphene-based-devices.html

More information: TaeYoung Kim, et al. "Large-Scale Graphene Micropatterns via Self-Assembly-Mediated Process for Flexible Device Application." Nano Letters. DOI: 10.1021/nl203691d

Journal information: Nano Letters

(PhysOrg.com) — Finding a simple, scalable way to pattern graphene for future electronics applications is one of the biggest challenges facing graphene researchers. While lithography has been widely used to create graphene patterns for electronic devices, its multiple processing steps make it too complex for large-scale use.
In a recent study, scientists have found that a motorized, movable roller can deposit a polymer solution onto a graphene surface in periodically striped and cross-striped patterns, which they used to make a transistor. By eliminating several steps involved in lithography, the new technique could lead to a low-cost method for producing graphene patterns for a variety of electronic devices on a large scale.

A motorized roller stripes PMMA in periodic patterns on a graphene substrate, which is later etched by a plasma treatment to make patterns in the graphene. Image credit: Kim, et al. ©2012 American Chemical Society

Copyright 2012 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.
Quantum Entanglement. Two related tasks that require quantum entanglement as a resource. In quantum teleportation, Alice receives a qubit in an unknown state, and destroys it by performing a Bell measurement on that qubit and a member of an entangled pair of qubits that she shares with Bob. She sends a two-bit classical message (her measurement outcome) to Bob, who then performs a unitary transformation on his member of the pair to reconstruct a perfect replica of the unknown state. In superdense coding, Alice receives a two-bit classical message, transmits the message by performing a unitary transformation on a member of an entangled pair that she shares with Bob, and then sends that qubit to Bob. Thus one qubit suffices to carry two classical bits of information. Source: National Science Foundation Workshop on Quantum Information Science

Not surprisingly, then, the next steps in the team's research involve ways of performing weak measurement and reversing measurement for other types of decoherence. "In particular," Kim tells PhysOrg, "we're interested in a realistic quantum communication scenario – so we're hoping to find ways to perform weak measurement and reversing measurement for the types of decoherence often found in fiber and free-space quantum channels."

They're also working on applying their protocol to solid-state (or superconducting) qubits and atomic qubits (such as trapped ions), because the particular type of decoherence considered in their study is directly responsible for loss of coherence in such systems. "We're actively discussing such experimental possibilities," he adds, "with colleagues working on solid-state physics and atomic physics."

Kim also describes how their findings may make it possible to effectively handle decoherence in quantum information by combining their scheme for protecting entanglement from decoherence with entanglement distillation, a protocol essential in long-distance quantum communications.
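The teleportation recipe in the box above can be verified with a few lines of linear algebra. Below is a minimal sketch (not from the NSF workshop or from the study; the amplitudes 0.6 and 0.8 are arbitrary example values) that runs all four Bell-measurement outcomes and checks that Bob's correction, conditioned on the two-bit classical message, reconstructs the unknown state:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

# Unknown state Alice wants to teleport (arbitrary example amplitudes)
psi = np.array([0.6, 0.8])

# Shared Bell pair |Phi+> = (|00> + |11>)/sqrt(2); first qubit Alice's, second Bob's
bell_pair = np.array([1., 0., 0., 1.]) / np.sqrt(2)

# Full three-qubit state: (unknown qubit) tensor (Bell pair)
state = np.kron(psi, bell_pair)

# Bell basis for Alice's two qubits, keyed by her two-bit classical message
bell_basis = {
    '00': np.array([1., 0., 0., 1.]) / np.sqrt(2),   # |Phi+>
    '01': np.array([0., 1., 1., 0.]) / np.sqrt(2),   # |Psi+>
    '10': np.array([1., 0., 0., -1.]) / np.sqrt(2),  # |Phi->
    '11': np.array([0., 1., -1., 0.]) / np.sqrt(2),  # |Psi->
}
# Unitary Bob applies on receiving each message
corrections = {'00': I, '01': X, '10': Z, '11': Z @ X}

for msg, b in bell_basis.items():
    # Alice's Bell measurement: project her two qubits onto b,
    # leaving Bob's (unnormalized) conditional state
    bob = b.conj() @ state.reshape(4, 2)
    bob = bob / np.linalg.norm(bob)     # renormalize after the measurement
    bob = corrections[msg] @ bob        # Bob's message-dependent correction
    assert np.allclose(bob, psi)        # a perfect replica of the unknown state
```

Each of the four outcomes occurs with probability 1/4, and in every branch Bob ends up holding an exact copy of psi, while Alice's original is destroyed by her measurement.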
"Let's consider a simple quantum communication scenario in which entangled photon pairs are distributed through quantum channels with decoherence," Kim illustrates. "Alice prepares the entangled qubit pairs and sends one qubit to Bob and the other qubit to Charlie. Ideally, Bob and Charlie share an entangled bit, or e-bit, which can be used for many quantum information tasks. However, all practical quantum channels cause a certain level of decoherence to the quantum states being distributed through the channel. If the decoherence is too strong, initially entangled qubit pairs end up losing their entanglement." As a result, Bob and Charlie now share two qubits with no entanglement, which are therefore of no use for quantum information.

"However," Kim explains, "using our protocol, Bob and Charlie can still share two qubits with some amount of entanglement even through a quantum channel with severe decoherence. The amount of entanglement that Bob and Charlie share depends on the strengths of the weak and reversing measurements. Bob and Charlie can now repeat the process until they accumulate a sufficient number of qubit pairs with less-than-maximal entanglement. Then, to produce a single maximally entangled qubit pair, Bob and Charlie perform an entanglement distillation protocol on multiple pairs of less-than-maximal entanglement." In other words, by using their protocol and combining it with entanglement distillation, Bob and Charlie can share maximally entangled qubit pairs even through quantum channels with strong decoherence.

Specifically addressing the study's impact on quantum information applications, Kim first notes that quantum teleportation requires that two parties share maximally entangled qubit pairs. "In realistic situations, due to decoherence in real quantum channels, sharing maximally entangled qubit pairs over long distances would not be possible, thus preventing quantum teleportation between two parties with a very large separation."
Of course, Kim points out, "these two parties could use quantum memory devices to store the qubits and then move apart, but all quantum memory devices have a certain storage time, after which it is not possible to extract the identical qubit. Furthermore, a practical quantum memory device adds decoherence to the quantum state being stored. Thus, our protocol will make long-distance quantum teleportation possible even if the quantum channel is not ideal."

Secondly, a quantum computer needs to operate coherently until the results are measured and read out. "In implementing a quantum computer," notes Kim, "a qubit and/or many entangled qubits must undergo unitary transformations before decoherence affects the qubit states. In other words, each qubit is said to have a certain characteristic decoherence time. If an operation cannot be done within that decoherence time, it becomes meaningless, as it no longer represents a unitary transformation. By using our protocol, it should be possible to prolong the effective decoherence time of a qubit, thus making the qubit less vulnerable to external perturbation."

Finally, in quantum cryptography, as in quantum teleportation, it is often essential that pure-state qubits be sent from one party to another. "If the channel between Alice and Bob adds decoherence," says Kim, "even if Alice sends a pure state, Bob doesn't receive a pure state. If this should happen, the quantum bit error rate, or QBER, of the quantum cryptography system will rise – and if it rises above a certain threshold value, it becomes impossible to generate secure keys. Clearly, our protocol will have a unique role in practical quantum cryptography whenever a realistic quantum channel is considered."

More generally, Kim tells PhysOrg, the team's research paves the way for dealing with decoherence in an active way by utilizing weak measurement and reversing measurement.
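The active use of weak and reversing measurements can be illustrated on a single qubit passing through an amplitude-damping channel. The density-matrix sketch below is an illustrative reconstruction, not the team's actual optical experiment: amplitude damping is one common model of the kind of decoherence discussed here, and the strengths p, q, and D are hypothetical values chosen for the demonstration. The weak measurement partially collapses the qubit toward |0> (which the damping channel leaves alone); after the channel, a matched reversing measurement restores the state with high fidelity, at the cost of a small success probability:

```python
import numpy as np

def amp_damp_kraus(D):
    """Kraus operators of an amplitude-damping channel of strength D."""
    E0 = np.array([[1., 0.], [0., np.sqrt(1. - D)]])
    E1 = np.array([[0., np.sqrt(D)], [0., 0.]])
    return [E0, E1]

def fidelity(psi, rho):
    """Overlap <psi|rho|psi> of a pure state with a density matrix."""
    return float(np.real(psi.conj() @ rho @ psi))

psi = np.array([1., 1.]) / np.sqrt(2)   # test state
rho = np.outer(psi, psi.conj())

D = 0.8                        # strong decoherence (hypothetical value)
p = 0.9                        # weak-measurement strength (hypothetical value)
q = 1. - (1. - p) * (1. - D)   # matched reversing-measurement strength

Mw = np.diag([1., np.sqrt(1. - p)])   # weak measurement: partial collapse toward |0>
Mr = np.diag([np.sqrt(1. - q), 1.])   # reversing measurement: partial collapse toward |1>

# Unprotected qubit: damping only
rho_damp = sum(E @ rho @ E.conj().T for E in amp_damp_kraus(D))

# Protected qubit: weak measurement -> damping -> reversal, then renormalize
r = Mw @ rho @ Mw.conj().T
r = sum(E @ r @ E.conj().T for E in amp_damp_kraus(D))
r = Mr @ r @ Mr.conj().T
success_prob = float(np.real(np.trace(r)))   # both null outcomes must occur
rho_prot = r / success_prob

print(fidelity(psi, rho_damp))   # ~0.72 without protection
print(fidelity(psi, rho_prot))   # ~0.98 with protection (success prob ~2%)
```

The trade-off is characteristic of the protocol: pre-collapsing the qubit toward the damping-immune state and reversing afterward recovers the input with high fidelity, but only probabilistically.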
"The general approach of using a non-projective quantum measurement, also known as a Positive Operator Valued Measure, or POVM" – a measure whose values are non-negative self-adjoint operators on a Hilbert space – "has been shown to have interesting and important practical applications in quantum information. In this work, we demonstrated that such generalized measurements can be used to actively battle decoherence – and I think it should be possible to find other practical applications of this approach," envisions Kim.

"Another aspect I wish to mention is something more fundamental," Kim concludes. "So far, measurement in quantum optics and quantum information has nearly always meant projective, or von Neumann, measurement. Our work suggests that there could be other interesting and important applications of weak measurement and reversing measurement, not only for quantum information, but also for precision measurement, atomic optics, cavity quantum electrodynamics, mesoscopic physics, and many other areas."

(PhysOrg.com) — Decoherence can be metaphorically seen as a quantum fall from grace: When quantum bits, or qubits, are in superposition – such as a single qubit simultaneously having both 1 and 0 values – they're said to be in a state of coherence. Any coupling with the environment – whether intentional (as in an observation or measurement) or accidental – causes the superposition to collapse, leaving only one of the possible coherent states; the qubit has then decohered.
When two or more objects – be they subatomic particles, atoms, molecules, or even small but macroscopic diamonds – are in a state of entanglement, a change in a property of one appears as the inverse change in the same property of the other, and does so instantaneously – i.e., no time elapses – regardless of the distance between the two entangled objects. Since entanglement is a critical factor in quantum information, and decoherence can degrade or terminate entanglement (the latter referred to as entanglement sudden death), preserving coherence is vital to the development of quantum computing, quantum cryptography, quantum teleportation, quantum metrology, and other quantum information applications.

Recently, scientists in the Department of Physics at Pohang University of Science and Technology (POSTECH) have devised a way to protect entanglement by mitigating decoherence using weak measurement and quantum measurement reversal. Led by Yoon-Ho Kim, the research team – which also included Yong-Su Kim, Jong-Chan Lee and Osung Kwon – faced a series of challenges in protecting entanglement from decoherence.

"In the past," Yoon-Ho Kim tells PhysOrg.com, "we've worked on experimental demonstrations of suppressing decoherence for a single-qubit state. We therefore knew that the effect of decoherence can be suppressed by using weak measurement and reversing measurement for a single qubit. In the current study, our initial question was whether this general approach – that is, using weak and reversed measurements – could work for entangled states."

Since their approach makes use of quantum measurement, which is often destructive and entanglement-breaking, their initial intuition was that it might not. "In some sense," Kim explains, "the main challenge was overcoming this mental barrier of intuition at first sight and getting ourselves to actually formulate the problem mathematically.
Quite often in quantum physics, intuition based on daily experience is not correct, because quantum effects are often counter-intuitive."

However, once their theoretical studies convinced the researchers that the approach would work for entangled states – making it possible to protect entanglement from decoherence using weak measurement and quantum measurement reversal – they focused on a clear way to demonstrate the effect experimentally. "The challenge here," Kim notes, "was to be able to vary the amount of decoherence and the strengths of the weak and reversing measurements precisely, so that the essence of the protocol could be clearly demonstrated. We were able to develop a linear optical method to precisely control the amount of decoherence based on a displaced Sagnac interferometer." A Sagnac interferometer uses counter-propagating light beams to measure its own angular velocity with respect to the local inertial frame.

Other innovations are possible, Kim notes. "In this work, we demonstrated a way to protect two-qubit entanglement from a particular type of decoherence. However, the general approach of using weak measurement and reversing measurement to suppress the effect of decoherence should be valid for other types of decoherence and for multipartite entangled states. Decoherence is often unavoidable in real life – and to date, several attempts to directly reduce decoherence have been made without much success. I think, in the future, we might battle decoherence using the protocol described in our work, since it's a much more subtle and potentially effective way to battle decoherence.
If we find weak and reversing measurements appropriate for a given type of decoherence," Kim adds, "practically all real-world quantum information implementations could benefit by utilizing the protocol."

Citation: Keeping it together: Protecting entanglement from decoherence and sudden death (2012, February 27) retrieved 18 August 2019 from https://phys.org/news/2012-02-entanglement-decoherence-sudden-death.html

More information: Protecting entanglement from decoherence using weak measurement and quantum measurement reversal, Nature Physics 8, 117–120 (2012), doi:10.1038/nphys2178.
(Phys.org) — For over nine thousand years, people have relied on various yeast strains to ferment food and drink, giving us flavorful breads and alcoholic beverages. But until now, no one really knew how the yeast managed to appear on schedule every year to help us out, especially in places where freezing winter temperatures would seem to make that impossible. Now, thanks to new research by a combined team of French and Italian researchers, the answer seems to have been found: as they write in their paper published in the Proceedings of the National Academy of Sciences, it appears that we have the wasp to thank.

For years, scientists assumed that yeast's ability to show up on virtually every grape in a vineyard was most likely due either to birds or to bees; the yeast lack a means of moving themselves about, and depending on the wind to carry them seemed too random. But until now, no one had figured out just who or what was responsible, or was able to explain how the yeast survived through long, cold winters. Some researchers had investigated birds, but found that the fungus didn't survive in them long, which led naturally to insects. But which ones? The team narrowed down the possible choices quickly by focusing only on those that are able not only to survive through the winter, but to produce new young in the spring. That led to wasps, which hibernate and then build nests in the spring, raising new offspring.

To find out if wasps were indeed the savior of yeasts, the team collected samples from seventeen different areas in and around vineyards in Italy. Lo and behold, they found that the majority of the wasps harbored yeast in their guts, and what's more, did so throughout all four seasons, including just prior to hibernation and just after.
They also found that the yeast turned up in the guts of the young shortly after they were first fed, ensuring that the yeast could carry on indefinitely.

The researchers also found that the wasps harbored all manner of yeasts; comparing the wasp isolates against more than 230 strains representing worldwide yeast variation showed that some matched strains used to make our foods and drinks, while others live in the wild. Thus it seems that wasps are at least one of the major players in the life cycle of yeasts. The research team isn't suggesting that wasps are solely responsible for allowing yeasts to move between grapes and other berries, or for helping them make it through the winter, but at least now one source is known for sure.

Journal information: Proceedings of the National Academy of Sciences

© 2012 Phys.org

Citation: Researchers find wasps are the key to yeast's ability to survive through winter (2012, July 31) retrieved 18 August 2019 from https://phys.org/news/2012-07-wasps-key-yeast-ability-survive.html

More information: Role of social wasps in Saccharomyces cerevisiae ecology and evolution, PNAS, Published online before print July 30, 2012, doi: 10.1073/pnas.1208362109

Abstract: Saccharomyces cerevisiae is one of the most important model organisms and has been a valuable asset to human civilization. However, despite its extensive use in the last 9,000 y, the existence of a seasonal cycle outside human-made environments has not yet been described. We demonstrate the role of social wasps as vector and natural reservoir of S. cerevisiae during all seasons. We provide experimental evidence that queens of social wasps overwintering as adults (Vespa crabro and Polistes spp.) can harbor yeast cells from autumn to spring and transmit them to their progeny. This result is mirrored by field surveys of the genetic variability of natural strains of yeast.
Microsatellites and sequences of a selected set of loci able to recapitulate the yeast strain's evolutionary history were used to compare 17 environmental wasp isolates with a collection of strains from grapes from the same region and more than 230 strains representing worldwide yeast variation. The wasp isolates fall into subclusters representing the overall ecological and industrial yeast diversity of their geographic origin. Our findings indicate that wasps are a key environmental niche for the evolution of natural S. cerevisiae populations, the dispersion of yeast cells in the environment, and the maintenance of their diversity. The close relatedness of several wasp isolates with grape and wine isolates reflects the crucial role of human activities on yeast population structure, through clonal expansion and selection of specific strains during the biotransformation of fermented foods, followed by dispersal mediated by insects and other animals.

Hornet (Vespa crabro) (worker). Image: Niek Willems/Wikipedia.
The researchers, Maurizio Consoli from INFN, along with Alessandro Pluchino and Andrea Rapisarda from INFN and the University of Catania in Italy, have published a paper on their reinterpretation of the ether-drift experiments in a recent issue of EPL.

"The main significance of our work is that those small residual effects found in all ether-drift experiments in gaseous systems, usually considered as uninteresting thermal disturbances, might instead represent the first experimental evidence for the Earth's motion within the CBR," Consoli told Phys.org. "The interest of our work is twofold. On the one hand, there is an important historical interest, because these small effects were obtained well before the discovery of the CBR (in 1965 by Penzias and Wilson, Astrophys. J. Lett. 142 (1965) 419) and well before the observation of the Earth's motion within the CBR (in 1977 by Smoot, Gorenstein and Muller, Phys. Rev. Lett. 39 (1977) 898, with the U2 experiment). On the other hand, the agreement between optical measurements in a laboratory and direct measurements of the CBR in space would indicate how powerful optical interferometers can be."

Background

In the 19th century, one of the most controversial questions in science was whether empty space was truly empty like a vacuum or whether it contained a medium known as ether. Ether had been proposed as a medium through which light waves could travel, since light's wave-like properties had recently been discovered, and it was believed that all waves needed some kind of medium to propagate through.

The first crucial experiment to provide evidence that ether does not exist as classically predicted came in 1887, when Michelson and Morley famously showed that light travels at the same speed in the perpendicular arms of an interferometer.
If ether existed—and if it behaved as expected—the Earth's motion through it would slow light down more in one direction than in the perpendicular direction; that is, the speed of light would be anisotropic, or directionally dependent.

Design of a modern ether-drift experiment, in which the light frequencies from two lasers in perpendicular directions are compared at a beat note detector. Credit: M. Consoli, et al. ©2016 EPLA

More information: M. Consoli, et al. "Cosmic Background Radiation and 'ether-drift' experiments." EPL. DOI: 10.1209/0295-5075/113/19001. Also at arXiv:1601.06518 [astro-ph.CO]

According to Consoli, Pluchino, and Rapisarda, however, the Michelson-Morley experiment did not yield a strictly null result because of its limited precision. The classically predicted ether-drift effect was of order 10⁻⁸, while the Michelson-Morley experiment had a precision of 10⁻⁹—precise enough to rule out the classically predicted effect, but hazy enough to leave some room for the possibility of a smaller-than-expected effect. In the years since, 20 or so similar experiments have been performed that further increased the precision, with the most recent test achieving the highest precision yet, 10⁻¹⁸, just last year (M. Nagel et al., Nature Comm. 6 (2015) 8174).

Scientists performing these tests today aren't as interested in disproving the existence of ether as they are in validating the foundations of Einstein's theory of special relativity. A difference in the speed of light in any two perpendicular directions in an inertial reference frame would contradict special relativity, which requires that the speed of light in vacuum be constant.
Many important concepts emerge from this fact, including that the universe has no preferred reference frame, that there is no absolute space or time, and that you can never really tell for sure whether you are at rest or in constant motion, since all motion is relative.

A matter of interpretation

At first glance, the 20 or so experiments performed since 1887 seem to have steadily improved the precision in support of the view that there is no ether and no preferred reference frame. However, not all the results were perfectly unambiguous. In particular, Dayton Miller's 1933 optical interferometer experiment detected so-called "fringe shifts," which were much smaller than any expected effect but large enough to be considered non-negligible. To make things more complicated, these residual effects were irregular and not consistently reproducible. At the time, the fringe shifts were written off as temperature differences inside the lab, since fluctuations of even a tiny fraction of a degree could have caused the observations. However, Miller disagreed with this interpretation. He argued that his measurements could not be explained by uniform heating, only by some non-uniform, directional effect.

Miller's experiment was not the only one to detect unexpectedly tiny effects. As Consoli and Pluchino, together with Caroline Matheson at the University of Cambridge, showed in a previous paper (Eur. Phys. J. Plus 128 (2013) 71 and arXiv:1302.3508 [physics.gen-ph]), all of the ether-drift experiments that were performed in gaseous media (either air or helium), including the original Michelson-Morley experiment, detected very small residual effects that were generally ignored.

In their new paper, the researchers argue that all of these residual effects may provide the first indirect evidence of the temperature anisotropy caused by the Earth's motion within the CBR.
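For scale, the classical second-order effect the early interferometers were hunting is easy to reproduce: the expected fringe shift on rotating the apparatus by 90 degrees is roughly dN = 2Lv²/(λc²). The sketch below uses the commonly quoted parameters of the 1887 Michelson-Morley apparatus (assumed here for illustration, not taken from this article):

```python
# Back-of-the-envelope classical ether-drift estimate for a Michelson-Morley
# style interferometer (arm length and wavelength are the commonly quoted
# values for the 1887 apparatus, assumed here for illustration).
L = 11.0        # m, effective optical path of each arm
lam = 550e-9    # m, wavelength of the light used
v = 30e3        # m/s, Earth's orbital velocity
c = 3.0e8       # m/s, speed of light

effect = (v / c) ** 2        # the classical second-order effect, ~1e-8
dN = 2 * L * effect / lam    # expected fringe shift on a 90-degree rotation
print(effect, dN)            # ~1e-8 and ~0.4 of a fringe
```

Michelson and Morley observed shifts no larger than a few hundredths of a fringe, an order of magnitude below this ~0.4-fringe classical prediction, which is what "ruling out the classical effect while leaving room for a smaller one" means in practice.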
Previous direct observations of the CBR in space found that the entire solar system is moving at a velocity of about 370 km/sec toward a specific point in the sky, creating a "kinematic dipole," which is basically a Doppler effect. As a consequence, "an observer moving through the CBR would see different temperatures in different directions," as the researchers explain in their paper. But so far, no study has actually measured the predicted temperature gradient caused by the solar system's movement in a laboratory.

The scientists explained that the temperature gradient is universal in the sense that all observers moving within the CBR (even hypothetical observers living on distant planets) will see a qualitatively similar effect. However, the quantitative aspects differ, depending on the particular state of motion of each planet and its surroundings. "This motion is called 'peculiar' because it is characteristic of our local position in the universe," Pluchino said. "In fact, it is obtained by combining the motion of our galaxy, and of the local group of galaxies, with a velocity of about 600 km/sec toward what is called the Great Attractor (a large concentration of matter situated at about 100 Mpc from us), along with the motion of the solar system within our galaxy. Therefore, an observer placed at the opposite side of our galaxy will also see a dipole anisotropy but, for him, the kinematical parameters will be different."

By analyzing the data from the interferometer experiments in gaseous systems, the researchers found that all of the residual effects produced velocities in good agreement with the theoretical velocity of 370 km/sec. "The average Miller fringe shift gave exactly the same observable velocity as in Michelson-Morley," Pluchino said.
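The kinematic dipole described above follows from the first-order Doppler formula T(θ) ≈ T₀(1 + (v/c)cos θ), so the temperature difference an observer should see across the sky is a one-line estimate. In the sketch below, the 370 km/sec velocity comes from the article; the mean CBR temperature T₀ = 2.725 K is the standard value, assumed here since the article does not quote it:

```python
T0 = 2.725           # K, mean CBR temperature (assumed standard value)
v = 370.0e3          # m/s, solar-system velocity inferred from the dipole
c = 299_792_458.0    # m/s, speed of light

# First-order Doppler dipole: T(theta) = T0 * (1 + (v/c) * cos(theta))
T_hot = T0 * (1. + v / c)    # looking along the direction of motion
T_cold = T0 * (1. - v / c)   # looking opposite the direction of motion
dT = (T_hot - T_cold) / 2.   # dipole amplitude, roughly 3.4 mK
print(dT)
```

A few millikelvin against a 2.7 K background is about one part in a thousand, which conveys how subtle any laboratory counterpart of this gradient would have to be.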
"Therefore, the standard thermal interpretation of Miller's data is only acceptable if this temperature effect has a non-local origin, i.e., if it does not depend on the particular conditions of the laboratory."

The scientists explain that the reason this temperature effect and velocity had not been noticed before now is that their new interpretation is based on relativity. "The basic difference with the standard point of view is that one should correctly reinterpret the observations according to relativity, and not just use the (incorrect) classical formulas," Pluchino said.

The researchers also addressed the question of why not all of the ether-drift experiments detected fringe shifts, but only those performed in gaseous media. As technology progressed over the decades, researchers carried out the experiments in different media, such as a vacuum or solid dielectrics. Traditionally, researchers have thought that the experiments were all testing the same thing, regardless of the medium. But the Italian scientists think otherwise. If their proposal is correct, then the CBR temperature gradient would affect gaseous systems more strongly than others, because the weakly bound gas molecules can be set in motion more easily by a thermal gradient than the molecules in a solid, which would not be expected to move very much. And in the case of a vacuum, a temperature gradient would have no matter to act on at all.

Implications

The scientists suggest that their interpretation could be tested by a new generation of precise laser interferometers, which, like the early experiments, could attempt to detect small differences in the speed of light between perpendicular directions in a gaseous medium.
This type of experiment would provide a much more precise test than those of the ’20s and ’30s, and even more precise than the most recent test in a gaseous medium, performed in the ’60s (which also detected residual time variations that the scientists show are consistent with the new interpretation).

If future experiments detect the expected signal, it would have far-reaching consequences on everything from physics to biology. “If this temperature gradient could be definitely confirmed in a laboratory, it would mean that everything on Earth (and on any other celestial body moving through the CBR) is exposed to a tiny energy flow,” Consoli said. “This flow is now very weak but, in the past, was substantially stronger when the CBR temperature was higher. Therefore, it has represented (and still represents) a sort of background noise which is independent of any localized source. It is known that such non-equilibrium condition can induce (or it could have induced) forms of self-organization in matter. Therefore, our result could also be relevant for those research areas which look for the origin of complexity in nature.”

What do the results mean for special relativity? “Clearly, the whole analysis uses a relativistic formalism,” Consoli said. “But the final picture is different from standard special relativity. In fact, the temperature gradient due to the Earth’s motion within the CBR can affect a weakly bound gaseous matter and, therefore, the velocity of light propagating inside it. Small differences in perpendicular directions can thus be detected with an interferometer. In this way, with measurements performed entirely inside a laboratory, one could distinguish between a state of rest and a state of uniform motion.
This is not too surprising since the CBR was not known in 1905.” Rather than speculate on this area, the researchers prefer to focus on more practical implications. “Our findings emphasize the power of optical interferometry in a laboratory,” Consoli said. “This could give precious, complementary information to the direct observations of the CBR in space.”

(Phys.org)—In a new study, scientists have proposed that tiny residual effects measured by ether-drift experiments in the 1920s and ’30s may be the first evidence of a temperature gradient that was theorized in the 1970s but never before detected in a laboratory. The theorized temperature gradient is thought to be caused by the solar system moving at 370 km/sec through the cosmic background radiation (CBR), the faint electromagnetic radiation that fills the universe.

Journal information: Europhysics Letters (EPL), Physical Review Letters

© 2016 Phys.org

Citation: Could 80-year-old ether experiments have detected a cosmological temperature gradient? (2016, February 8) retrieved 18 August 2019 from https://phys.org/news/2016-02-year-old-ether-cosmological-temperature-gradient.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
This point’s impact, Johnson continues, is based on the fact that feedforward loops can often be related to some form of feedforward control (as used, he notes, by engineers working on assisted automobile steering). “Therefore, feedforward loops in certain biological networks – gene regulatory networks, in particular, but also others such as neural networks – are thought to play an important role in how such systems work. In food webs, feedforward loops are associated with omnivorous species, which have often been reported to have an effect on ecosystem stability – although some say that effect is positive and others negative!”

The researchers are also investigating whether negentropy – the opposite of entropy, in which a physical, thermodynamic or biological process creates order – is affected by trophic coherence. “The modern concept of entropy,” Johnson points out, “comes from statistical physics and is a property of ensembles, as described above – that is, the entropy of an ensemble is simply a function of the number of elements it contains.” Moreover, he adds, graph ensemble entropy has proven to be a powerful tool for understanding various network properties. “We are currently studying the entropy of the coherence ensemble we defined for this work. In general, higher trophic coherence would be associated with lower entropy states, which means that if networks are more coherent than the random expectation there must indeed be some kind of negentropic process at work.” Johnson notes that the impact in this case relative to trophic coherence would be found in quantifying the extent to which different empirical networks have been driven from their maximum entropy state.
“This might be the best way of discovering when there are coherence-inducing mechanisms at work, how much energy must be involved, and ultimately identifying the nature of such processes.”

Phys.org also asked Johnson whether there are scale boundaries to trophic coherence – for example, is trophic coherence applicable to nanoscale systems or structures, or to quantum mechanics (in which eigenvalues are highly relevant)? “This is an interesting question,” he replied. “We haven’t thought much about this yet – but there’s no reason in principle why trophic coherence shouldn’t be relevant in other settings than the ones we’ve considered, and at other scales. Although we’ve been thinking about trophic coherence as a property of networks, it could just as easily be regarded as a property of matrices, which have many different interpretations and applications in science. Could the concept of trophic coherence be extended to the complex, Hermitian matrices describing quantum operators, for instance?” (A Hermitian matrix is a square matrix equal to its own conjugate transpose.) “If so, what would the effect of coherence on eigenspectra mean for physical observables? We hope these and other open questions will attract the attention of researchers in the relevant fields, who may be able to take the work further.”

Johnson also noted that while certain natural systems are unsurprising given their trophic coherence, this is not always the case. “Most of the things we measured in our set of empirical networks were actually close to what we would predict given their trophic coherence. The exceptions were a couple of food webs which, curiously, have no cycles despite being in the loopful regime – but this doesn’t imply, by any means, that everything is determined by a network’s trophic coherence, since there are a great many other quantities which we haven’t yet considered.
What was somewhat surprising, however, was that while the gene regulatory networks seem highly coherent, they are all actually rather close to what would be their random expectation, which is due to their tendency to have many basal nodes.” Johnson explains that these networks – which he says underlie all the processes that cells are capable of performing, and determine the various kinds of cells they can turn into – must have been fine-tuned by evolution in myriad ways. “It seems therefore surprising that their trophic coherence shows little deviation from our random expectation. On the other hand, the metabolic networks are all highly incoherent, as compared with the random expectation, but we have as yet no idea why this might be.”

Other coherence- or incoherence-inducing mechanisms would presumably alter a network in such a way that the probability of an edge occurring between two nodes depends on their trophic levels, he continues, pointing out that this might happen because trophic levels reflect some other node characteristic, their intra-network function, or their position in one or more dimensions. “For example, in the case of food webs,” he illustrates, “there are several biological features of species which are related to trophic levels, so it is natural that if a given predator has specialized in consuming species A, it is more likely to prey also on B if A and B are at similar levels. However, in some ecosystems species can also occupy different positions in space – for instance, they might exist at different depths in a lake – which could also affect coherence. Moreover, in a social network, people might interact with others according to their job, or their status – but neurons, genes, or words in a text are connected to others which have particular functional roles.
While we’d expect to find mechanisms which led to edges forming preferentially between nodes according to these kinds of features, functions, or dimensions, there are probably other ways which we haven’t yet thought of.”

Another question is how concepts such as trophic coherence might be understood when a distinction between excitatory and inhibitory interactions is made. “There are at least two ways in which it might be useful to define trophic levels, and thus coherence, in this case. One would be simply to attribute a negative value to inhibitory interactions, but keep other definitions broadly the same, so that trophic levels could be either positive or negative,” Johnson tells Phys.org. “Another is to separate the effects of excitatory and inhibitory interactions as if they were in different networks, so that each node would have two different trophic levels, and there would be an excitatory coherence and an inhibitory one. This fits in with work currently being done on so-called multiplex networks. In the end, we would have to see which definition proves most useful for understanding real-world networks.”

Moving forward, Johnson says, he and his colleagues are investigating avenues that follow from the research under discussion, such as extending the concepts of trophic levels and coherence to a broader class of networks – for example, those with weighted edges or many layers. “We then hope to use these in conjunction with other well-established network measures to identify functional groups of nodes in specific systems, such as gene regulatory networks or ecosystems. Another of our interests is the integration of these results within a more general mathematical framework relating structure and dynamics in complex systems.
Finally,” he concludes, “there are questions in ecology that this work might illuminate, including how best to model food webs, and whether there are network properties of ecosystems that could alert us to the risk of a tipping point, such as a cascade of extinctions.”

Johnson adds that he and Jones are both working on several other topics as well as networks. “For instance, I have various ongoing collaborations with people at Warwick and Granada looking into the relationship between human conflict and geography, or how certain findings in neuroscience can be understood and modelled mathematically.”

Regarding other areas of research that might benefit from their study, Johnson says that the most immediate would be complex networks and graph theory, where their results should be of interest to people studying graph ensembles, the relationships between different topological quantities, or the stability of complex, dynamical systems. “As mentioned above, there are some results which are particularly relevant for ecologists, especially those engaged in modelling ecosystems. We hope that some of these ideas will be picked up by researchers in other areas where systems can be fruitfully regarded as networks – I have mentioned genetics, but there are several others, such as neuroscience, sociology, or economics – and developed further.”

Dr. Samuel Johnson discussed the paper that he and Dr. Nick S. Jones published in Proceedings of the National Academy of Sciences. “Demonstrating that trophic coherence is a property found in a wide range and scale of ecosystems and networks was actually easier than we had expected,” Johnson tells Phys.org.
“We’d previously identified trophic coherence as an important property of food webs [1], in which our main result was the role trophic coherence played in ecosystem stability.” (Food webs are ecosystem networks of species trophic levels – that is, what a species eats, and what it is eaten by – and in fact, the word trophic derives from the Greek τροφή (trophē), which refers to food or nourishment.) “Ecologists have long characterized species in food webs by their trophic levels, so the idea of measuring how well defined these levels were seemed very natural.”

However, he points out that while researchers have, over the last 15 or so years, defined and studied a great many quantities associated with complex networks, it appears that the role of trophic levels in networks other than food webs has not been studied. “All we had to do was get the data other researchers have made available for various different kinds of networks, and measure the trophic levels and coherence associated with them,” he explains. “Then, when we set about developing a mathematical framework that could relate trophic coherence to other network quantities, one of the first steps was to derive equations for the expected values of trophic coherence and mean trophic levels in random graphs – that is, the values we’d expect a network to have if the edges had been placed randomly between the nodes. This in turn allowed us to investigate a given empirical network and conclude, for instance, whether it was more or less coherent than if it was random.”

Regarding their derivation of analytic mathematical expressions that show looplessness is a likely consequence of trophic coherence, Johnson recounts, the scientists could see intuitively – or by drawing pictures of networks with greater and lesser coherence – that this property was related to the likely number of cycles (or loops) in directed networks (that is, those in which the links, or edges, have a direction).
In order to study this relationship mathematically, he adds, they employed the statistical physics method of ensembles – virtual collections of a large to infinite number of identical systems whose behavior is inferred from the ensemble’s aggregate behavior – which has been used to study random graphs.

Four directed networks, plotted so that the height of each node on the vertical axis is proportional in each case to its trophic level. The top two are synthetic networks, generated in a computer with the ‘preferential preying model’, which allows the user to tune trophic coherence (measured with the incoherence parameter, q). Thus, they both have the same numbers of nodes and edges, but the one on the left is perfectly coherent (q=0) while the one on the right is more incoherent (q=0.7). The bottom two are empirically derived: the one on the left is the Ythan Estuary food web, which is significantly coherent (it has q=0.42, which is about 15% of its expected q) and belongs to the ‘loopless’ regime; the one on the right is a representation of the Chlamydia pneumoniae metabolic network, which is significantly incoherent (q=8.98, or about 162% of the random expectation) and sits in the ‘loopful’ regime. The top two networks are reproduced from the SI Appendix of Johnson et al., “Trophic coherence determines food-web stability” (PNAS, 2014), while the bottom two are from the SI Appendix of Johnson & Jones, “Looplessness in networks is linked to trophic coherence” (PNAS, 2017). Courtesy: Dr. Samuel Johnson.

More information: Looplessness in networks is linked to trophic coherence, PNAS (2017) 114(22):5618-5623, doi:10.1073/pnas.1613786114

Related: [1] Trophic coherence determines food-web stability, PNAS (2014) 111(50):17923-17928, doi:10.1073/pnas.1409077111

The scientists credit a moment that proved key to their investigation.
“Our crucial insight was that, given its trophic coherence, we could associate the expected number of cycles in a network with the probability that a particular kind of random walker on a line would return to its starting point.” Random walkers are imaginary objects whose movement is determined by a random selection between two or more choices at each increment, or hop. “Random walkers have proven useful concepts in a wide range of contexts,” Johnson notes, “from Albert Einstein’s explanation of Brownian motion that proved the existence of molecules, to Sergei Brin and Larry Page’s PageRank algorithm that gave rise to Google. In our case, we defined random walkers whose hops were drawn from a distribution centred at one and with standard deviation equal to the network’s trophic incoherence.” The researchers found that higher incoherence was associated with a higher probability of the walker returning to its origin, as well as a higher prevalence of loops in the associated network.

With this method, Johnson tells Phys.org, they were able to obtain expectations and probability distributions for several quantities of interest as a function of trophic coherence, which they termed the coherence ensemble. Moreover, they found that once the trophic coherence was taken into account, the numbers of cycles and related magnitudes measured in all the empirical networks they studied were very close to their theoretical expectations. “From this we were able to conclude that trophic coherence and properties such as looplessness” (which they loosely define as having few or no cycles) “were closely related.”

“It could, of course, be the case,” Johnson acknowledges, “that certain classes of real networks are coherent as a consequence of some process which suppressed cycles. For instance,” he illustrates, “if ecosystems with too many cycles tended to become unstable and collapse, then perhaps only loopless ones survived, and trophic coherence followed from that.
However, when we generated networks in a computer so as to have no cycles, we found that this does not induce trophic coherence, while those generated to be sufficiently coherent are loopless.” The researchers therefore concluded that coherence-inducing mechanisms are most likely responsible for looplessness in nature.

In addition to the examples of looplessness resulting from trophic coherence mentioned in their paper, Johnson discussed several classes of networks in which trophic levels are likely to be related to some kind of node function, as seems to occur with syntactic function in word adjacency graphs. “We’d expect that if we could obtain data on such systems, we might find that their trophic coherence or incoherence plays a role in their behavior, via its effects on looplessness or loopfulness, as the case may be. More broadly, we believe that classifying the nodes in such networks by trophic level might be useful, as is the case for ecosystems.” For example, he illustrates, power relations between people in various kinds of organizations might follow this pattern. “Imagine an army, a corporation, or a whole society, in which each person is a node and a directed edge (an arrow) points from every individual to those to whom they report, or owe some kind of obedience. A person’s trophic level would give an indication of their hierarchical position, and perhaps the trophic coherence of the whole system might be related to the speed of information transmission or its robustness to revolts. This is something we’re currently thinking about.”

The scientists are also hoping to study the meaning of trophic levels in neural networks. “We included only one example of these in our paper – the much-studied brain of the C. elegans worm – but we’re interested in effects on computational abilities, in which feedback loops can be very important.
It’s curious that neural networks used for deep learning are perfectly coherent – so what might a bit of incoherence do?”

While not discussed in this paper, Johnson and Phys.org discussed the question of whether the number of a system’s feedforward loops is affected by trophic coherence. “It’s very interesting you should ask that! As part of his doctoral work, Janis Klaise has been looking into this very question – and we have a paper submitted showing that this is indeed the case. It has been known for some time that if one studies the motif profiles of empirical networks – that is, the prevalence of each of the possible ways in which triplets of nodes can be connected – there are several broad families of networks with similar profiles.” There are two main groups of food webs, he illustrates, differing primarily in whether the feedforward loop is under- or over-represented, corresponding to more or less trophically coherent food webs, respectively.

Journal information: Proceedings of the National Academy of Sciences

Network of concatenated words from Green Eggs and Ham, by Dr. Seuss. The height of each word is proportional to its trophic level. Colours indicate syntactic function; from lowest to highest mean trophic level: nouns (blue), prepositions and conjunctions (cyan), determiners (pink), adverbs (yellow), pronouns (green), verbs (red), and adjectives (purple). When a word has more than one function, the one most common in the text is used. Credit: Johnson S, Jones NS (2017) Looplessness in networks is linked to trophic coherence. Proc Natl Acad Sci USA 114(22):5618-5623.
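The quantities discussed throughout – trophic levels, the incoherence parameter q, and the random-walker picture – can be sketched in a few lines of code. This is an illustrative reconstruction, not the authors' code: the fixed-point solver, the Gaussian hop distribution, the function names, and the toy networks are all assumptions of the sketch.

```python
import math
from typing import Dict, List

def trophic_levels(edges) -> Dict[str, float]:
    """Trophic level s_i: basal nodes (no incoming edges) sit at level 1;
    every other node sits one level above the mean level of its in-neighbours.
    Solved by fixed-point iteration, which suffices for the small acyclic
    examples below."""
    nodes = {n for e in edges for n in e}
    preys: Dict[str, List[str]] = {n: [] for n in nodes}
    for prey, predator in edges:          # edges point prey -> predator
        preys[predator].append(prey)
    s = {n: 1.0 for n in nodes}
    for _ in range(200):
        s = {n: 1.0 if not preys[n]
             else 1.0 + sum(s[p] for p in preys[n]) / len(preys[n])
             for n in nodes}
    return s

def incoherence(edges) -> float:
    """Incoherence parameter q: the standard deviation of the edge trophic
    distances x = s_predator - s_prey, whose mean is exactly 1."""
    s = trophic_levels(edges)
    x2 = [(s[pred] - s[prey]) ** 2 for prey, pred in edges]
    return math.sqrt(max(0.0, sum(x2) / len(x2) - 1.0))

def return_density(n_hops: int, q: float) -> float:
    """Random-walker analogy, with the hop distribution assumed Gaussian
    here: each hop ~ Normal(1, q), so after n hops the position is
    Normal(n, n*q^2), and the density of being back at the origin decays
    like exp(-n / (2 q^2)) -- returns (hence cycles) are exponentially
    suppressed for coherent networks (small q)."""
    var = n_hops * q * q
    return math.exp(-n_hops ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

chain = [("a", "b"), ("b", "c")]                  # perfectly coherent chain
omnivory = [("a", "b"), ("a", "c"), ("b", "c")]   # c feeds on two levels

print(incoherence(chain))                # -> 0.0
print(round(incoherence(omnivory), 3))   # -> 0.408
print(return_density(3, 0.4) < return_density(3, 1.0))  # -> True
```

The omnivory triangle shows the basic intuition: because node c feeds at two different levels, its edges span trophic distances of 1.5 and 0.5 rather than exactly 1, and q becomes nonzero.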
(Phys.org)—Complexity – defined as having emergent properties or traits that are not a function of, and are therefore difficult or inherently impossible to predict from, the discrete components comprising the system – is a characteristic of complex systems at a wide range of scales (such as genes, neurons and other cells, brains, computers, language, and both natural and sociopolitical ecosystems) that comprise interconnected elements capable of self-modification via feedback loops. At the same time, there are networks (biological and otherwise) that have far fewer of these loops than might be expected – but while these low-feedback networks are known to display high stability, the mechanism for feedback suppression (which imparts that stability) has remained unidentified. Recently, however, scientists at the University of Warwick and Imperial College London have shown that the level of feedback in complex systems is a function of trophic coherence – a property that reveals the distribution of nodes into high- and low-feedback network levels.

Citation: Trophic coherence explains why networks have few feedback loops and high stability (2017, August 14) retrieved 18 August 2019 from https://phys.org/news/2017-08-trophic-coherence-networks-feedback-loops.html

© 2017 Phys.org
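The motif profiles Johnson mentions can be computed by brute force for small graphs. A sketch counting the feedforward-loop triad (a regulates b and c, and b also regulates c); the function name and toy graph are illustrative assumptions, and real motif analyses use far more efficient enumeration:

```python
from itertools import permutations

def count_feedforward_loops(edges):
    """Count feedforward-loop motifs: ordered triples (a, b, c) with
    edges a->b, b->c, and the 'shortcut' a->c."""
    eset = set(edges)
    nodes = {n for e in edges for n in e}
    return sum(1 for a, b, c in permutations(nodes, 3)
               if (a, b) in eset and (b, c) in eset and (a, c) in eset)

# Toy example: one feedforward loop (a->b->c with shortcut a->c)
# plus an unrelated edge.
g = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
print(count_feedforward_loops(g))  # -> 1
```

Comparing such counts against a randomized baseline is what distinguishes the under- and over-represented feedforward-loop families of food webs described above.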
The American writer-director is known for the rare infusion of hilarity and melancholy in his films. The festival will showcase all the movies in chronological order of direction over the six days. Each film screening will be followed by an interactive session with industry professionals and a light quiz with exciting prizes.

The festival will open with Bottle Rocket, Anderson’s directorial debut, featuring Luke Wilson, Owen Wilson and Ned Dowd in a story of young Texan master thieves. The interactive sessions after the films will explore the thematics, camera techniques, aesthetics and music in Wes Anderson movies. Rushmore, The Royal Tenenbaums, The Life Aquatic with Steve Zissou, The Darjeeling Limited, Fantastic Mr. Fox and Moonrise Kingdom will be screened on the subsequent days.

The cast and crew of The Darjeeling Limited, casting director Dilip Shankar and actress Charu Shankar, will also be present after the screening on Friday, 20 December, to interact and share their experiences with the audience.

To complete the festive mood, music from Anderson’s films will be played throughout the festival. Needle in the Hay (Elliott Smith), I am Waiting (The Rolling Stones) and other selections by music supervisor Randall Poster are some of the songs on the playlist. Wes Anderson commercials, interviews and short films will also be shown in the interludes, bringing Indian audiences closer to the American mood of the films.

When: 16-21 December, 6 pm onwards.
Where: The American Center, Kasturba Marg.
Kolkata: The results of the three-tier Panchayat election held on May 14 have witnessed the rise of Independent candidates, who have occupied the third position in the Gram Panchayat tier. Independent candidates have bagged 1,335 Gram Panchayat seats, more than the number of seats bagged by the CPI-M and Congress so far.

Most of these Independent candidates are rebel Trinamool Congress leaders. At the time of distribution of party tickets before the election, party supremo Mamata Banerjee had said that it would not be possible to give tickets to all the party workers, and that those who failed to get tickets would be used by the party in some other capacity. However, the leaders could not hold their patience and contested the poll as Independent candidates. Trinamool Congress secretary general Partha Chatterjee has already announced that those who filed nominations as Independent candidates, flouting the party’s diktats, will be thrown out of the party.

The rebel Trinamool Congress candidates were backed wholeheartedly by the CPI-M and BJP, which helped the candidates with money and muscle power. Many areas saw clashes between Trinamool Congress workers and Independent candidates. The BJP and CPI-M had also fielded their own candidates without party symbols as Independents, to establish control over the Gram Panchayats, the lowest tier of the three-tier Panchayat system.

After the election results, many Independent candidates who are rebel Trinamool Congress leaders and BJP workers have contacted the Trinamool Congress, expressing their willingness to join the party.
“We have gone through the Curative Petitions and the relevant documents. In our opinion, no case is made out within the parameters indicated in the decision of this Court in Rupa Ashok Hurra vs. Ashok Hurra & Another, reported in 2002 (4) SCC 388. Hence, the Curative Petitions are dismissed,” a five-judge Bench headed by Chief Justice HL Dattu said.

The brief decision of the Bench, also comprising Justices TS Thakur, AR Dave, Ranjan Gogoi and Shiva Kirti Singh, came on the curative pleas filed by the then UPA government against dismissal of the review petitions. The review
Darjeeling: As sexual crimes against minors are steadily on the rise in various parts of the country, including Darjeeling, the government has undertaken various efforts to tighten the noose on child molesters by making punishment for rape more stringent, along with the rest of the country. “There were two incidents of abuse of minors in the past 72 hours. A total of 5 persons have been arrested in connection with these two separate cases,” stated Pranay Rai, Public Prosecutor, Darjeeling.

On Sunday, a three-and-a-half-year-old girl was allegedly abused in Darjeeling. According to the FIR, she was molested by a 49-year-old man who was known to her father. Sources stated that the incident occurred when the victim was at home with her father. The neighbour, 49-year-old Ramashanker Saha, arrived at the house and offered drinks to the victim’s father; when the father was in an inebriated state, Saha allegedly committed the crime. Later, the victim, bleeding profusely and in grave trauma, broke down in front of her mother. An FIR was lodged and Saha was immediately arrested. “He has been charged under Section 6 of the Protection of Children from Sexual Offences (POCSO) Act,” stated the Public Prosecutor.

In another case, a 16-year-old was allegedly gangraped by four men on Saturday in Darjeeling. Laku Subba, Wangdi Tamang, Kamal Mothay and Brijesh Mothay have been arrested under Section 6 of the POCSO Act and remanded to judicial custody. “These are special cases. Though we do not have fast track courts, we will try to end the trial within a month,” stated Rai. Incidentally, the Cabinet has already approved an Ordinance (Executive Order) awarding the death penalty in cases of rape of girls below the age of 12.
The Cabinet has further recommended increasing the punishment to 20 years of rigorous imprisonment in cases of rape of a girl below the age of 16. In cases of rape of women, the punishment is to be increased from the existing 7 years to 10 years of rigorous imprisonment. The Cabinet has also prescribed the mandatory completion of investigation and trial of rape cases within two months. Establishing more fast track courts, appointing more Public Prosecutors and making special forensic kits available at police stations have also been recommended. As per reports, in 2016 alone 40,000 rape cases were registered in India, of which 40% involved minors.