NEWS
Mar 5, 2009 2:21:10 GMT 4
Post by towhom on Mar 5, 2009 2:21:10 GMT 4
Ben-Gurion University engineers develop technique to help combat nuclear proliferation
A form of Americium (Am-241) could 'de-claw' nuclear fuel producers, ensuring only peaceful plutonium use
EurekAlert, Public Release: 4-Mar-2009
www.eurekalert.org/pub_releases/2009-03/aabu-bue030409.php

BEER-SHEVA, ISRAEL – Ben-Gurion University of the Negev engineers have developed a technique to "denature" plutonium created in large nuclear reactors, making it unsuitable for use in nuclear arms. By adding Americium (Am-241), a form of the basic synthetic element found in commercial smoke detectors and industrial gauges, plutonium can only be used for peaceful purposes. This technique could help "de-claw" more than a dozen countries developing nuclear reactors if the United States, Russia, Germany, France and Japan agree to add the denaturing additive into all plutonium. An article on the technique and findings will appear next month in the Science and Global Security journal. "When you purchase a nuclear reactor from one of the five countries, it also provides the nuclear fuel for the reactor," explains Prof. Yigal Ronen of BGU's Department of Nuclear Engineering, who headed the project. "Thus, if the five agree to insert the additive into fuel for countries now developing nuclear power -- such as Bahrain, Egypt, Kuwait, Libya, Malaysia, Namibia, Qatar, Oman, United Arab Emirates, Saudi Arabia and Yemen -- they will have to use it for peaceful purposes rather than warfare." Ronen originally worked on Neptunium-237 for the purpose of denaturing plutonium, but switched to Americium, which is meant for pressurized water reactors (PWRs), such as the one being built in Iran. "Countries that purchase nuclear reactors usually give the spent fuel back to the producer," explains Ronen. "They wouldn't be able to get new plutonium for weapons if it is denatured, but countries that make nuclear fuel could decide not to denature it for themselves." Nuclear fuel used in nuclear reactors has two isotopes of uranium. One is fissionable, while the other is not. The unfissionable component undergoes a number of nuclear reactions, turning some of it into plutonium. The plutonium also includes fissionable and unfissionable components, and the amount of fissionable plutonium created in nuclear reactors is enough to be used in nuclear weapons.

About Ben-Gurion University of the Negev and American Associates: Ben-Gurion University of the Negev is a world-renowned institute of research and higher learning with 18,000 students on campuses in Beer-Sheva, Sede Boqer and Eilat in Israel's southern desert. It is a university with a conscience, where the highest academic standards are integrated with community involvement, committed to sustainable development of the Negev. Founded in 1972, American Associates, Ben-Gurion University of the Negev plays a vital role in helping the University fulfill its unique responsibility to develop the Negev, reach out to its local community and its Arab neighbors, and share its expertise with the world. For more information, please visit www.aabgu.org.

Uh, Americium, huh...
Well, let's check this out, shall we?
en.wikipedia.org/wiki/Americium
periodic.lanl.gov/elements/95.html
Now yesterday there was a big bruh-hah-hah about increased ionizing radiation exposure cited in NCRP Report No. 160. And now you want to add this element to nuclear fissionable materials. Not to mention, it is already being used in smoke detectors?
What are you saying here?
Pretty soon we won't need lights - we'll all "glow in the dark".
NEWS
Mar 5, 2009 2:30:29 GMT 4
Post by towhom on Mar 5, 2009 2:30:29 GMT 4
Mountain on Mars may answer big question
Rice study hints at water – and life – under Olympus Mons
EurekAlert Public Release: 4-Mar-2009
www.rice.edu/nationalmedia/news2009-03-04-mars.shtml

The Martian volcano Olympus Mons is about three times the height of Mount Everest, but it's the small details that Rice University professors Patrick McGovern and Julia Morgan are looking at in thinking about whether the Red Planet ever had – or still supports – life. Using a computer modeling system to figure out how Olympus Mons came to be, McGovern and Morgan reached the surprising conclusion that pockets of ancient water may still be trapped under the mountain. Their research is published in February's issue of the journal Geology. The scientists explained that their finding is more implication than revelation. "What we were analyzing was the structure of Olympus Mons, why it's shaped the way it is," said McGovern, an adjunct assistant professor of Earth science and staff scientist at the NASA-affiliated Lunar and Planetary Institute. "What we found has implications for life – but implications are what go at the end of a paper." Co-author Morgan is an associate professor of Earth science. In modeling the formation of Olympus Mons with an algorithm known as particle dynamics simulation, McGovern and Morgan determined that only the presence of ancient clay sediments could account for the volcano's asymmetric shape. The presence of sediment indicates water was or is involved. Olympus Mons is tall, standing almost 15 miles high, and slopes gently from the foothills to the caldera, a distance of more than 150 miles. That shallow slope is a clue to what lies beneath, said the researchers. They suspect that if they were able to stand on the northwest side of Olympus Mons and start digging, they'd eventually find clay sediment deposited there billions of years ago, before the mountain was even a molehill. The European Space Agency's Mars Express spacecraft has in recent years found abundant evidence of clay on Mars. This supports a previous theory that where Olympus Mons now stands, a layer of sediment once rested that may have been hundreds of meters thick. Morgan and McGovern show in their computer models that volcanic material was able to spread to Olympus-sized proportions because of the clay's friction-reducing effect, a phenomenon also seen at volcanoes in Hawaii. What may be trapped underneath is of great interest, said the researchers. Fluids embedded in an impermeable, pressurized layer of clay sediment would allow the kind of slipping motion that would account for Olympus Mons' spread-out northwest flank – and they may still be there. Thanks to NASA's Phoenix lander, which scratched through the surface to find ice underneath the red dust last year, scientists now know there's water on Mars. So Morgan and McGovern feel it's reasonable to suspect water may be trapped in pores in the sediment underneath the mountain. "This deep reservoir, warmed by geothermal gradients and magmatic heat and protected from adverse surface conditions, would be a favored environment for the development and maintenance of thermophilic organisms," they wrote. This brings to mind the primal life forms found deep in Earth's oceans, thriving near geothermal vents. Finding a source of heat will be a challenge, they admitted. "We'd love to have the answer to that question," said McGovern, noting evidence of methane on Mars is considered by some to be another marker for life.
"Spacecraft up there have the capability to detect a thermal anomaly, like a magma flow or a volcano, and they haven't. "What we need is 'ground truth' – something reporting from the surface saying, 'Hey, there's a Marsquake,' or 'Hey, there's unusual emissions of gas.' Ultimately, we'd like to see a series of seismic stations so we can see what's moving around the planet." The paper appears online in Geology at: geology.gsapubs.org/cgi/content/abstract/37/2/139
NEWS
Mar 5, 2009 7:11:48 GMT 4
Post by nodstar on Mar 5, 2009 7:11:48 GMT 4
Hello all - just arrived! Huge congrats Noddie - you are a true (Southern!) Star! Just wanted to let you all know that I'm here but may be quiet for a while - my life is undergoing massive changes right now (but nothing nearly as stressful as Dan & Marci's) and my attention is focussed on learning to cope with that. Hopefully 'normality will be resumed soon' and I will get back to posting properly; until then I shall pop in as and when I can and enjoy reading all your news. Much love, Lonetwin

Hiya Lonetwin ..!!! I was hoping you would find your way here .. ;D Great to see you posting!! and BE WELCOME !!! Lotsa love Nodstar*
NEWS
Mar 5, 2009 7:26:27 GMT 4
Post by towhom on Mar 5, 2009 7:26:27 GMT 4
Astronomers Detect Two Black Holes in a Cosmic Dance
Universe Today, written by Anne Minard, March 4th, 2009
www.universetoday.com/2009/03/04/astronomers-detect-two-black-holes-in-a-cosmic-dance/

Artist's conception of the binary supermassive black hole system. Each black hole is surrounded by a disk of material gradually spiraling into its grasp, releasing radiation from X-rays to radio waves. The two black holes complete an orbit around their center of mass every 100 years, traveling with a relative velocity of 6,000 kilometers (3,728 miles) per second. (Credit: P. Marenfeld, NOAO)

Paired black holes are theorized to be common, but have escaped detection — until now. Astronomers Todd Boroson and Tod Lauer, from the National Optical Astronomy Observatory (NOAO) in Tucson, Arizona, have found what looks like two massive black holes orbiting each other in the center of one galaxy. Their discovery appears in this week's issue of Nature. Astronomers have long suspected that most large galaxies harbor black holes at their center, and that most galaxies have undergone some kind of merger in their lifetime. But while binary black hole systems should be common, they have proved hard to find. Boroson and Lauer believe they've found a galaxy that contains two black holes, which orbit each other every 100 years or so. They appear to be separated by only 1/10 of a parsec, a tenth of the distance from Earth to the nearest star. After a galaxy forms, it is likely that a massive black hole can also form at its center. Since many galaxies are found in clusters of galaxies, individual galaxies can collide with each other as they orbit in the cluster. The mystery is what happens to these central black holes when galaxies collide and ultimately merge together. Theory predicts that they will orbit each other and eventually merge into an even larger black hole. "Previous work has identified potential examples of black holes on their way to merging, but the case presented by Boroson and Lauer is special because the pairing is tighter and the evidence much stronger," wrote Jon Miller, a University of Michigan astronomer, in an accompanying editorial. The material falling into a black hole emits light in narrow wavelength regions, forming emission lines which can be seen when the light is dispersed into a spectrum. The emission lines carry the information about the speed and direction of the black hole and the material falling into it. If two black holes are present, they would orbit each other before merging and would have a characteristic dual signature in their emission lines. This signature has now been found. The smaller black hole has a mass 20 million times that of the sun; the larger one is 50 times bigger, as determined by their orbital velocities. Boroson and Lauer used data from the Sloan Digital Sky Survey, a 2.5-meter (8-foot) diameter telescope at Apache Point in southern New Mexico, to look for this characteristic dual black hole signature among 17,500 quasars. Quasars are the most luminous versions of the general class of objects known as active galaxies, which can be a hundred times brighter than our Milky Way galaxy, and are powered by the accretion of material into supermassive black holes in their nuclei. Astronomers have found more than 100,000 quasars. Boroson and Lauer had to eliminate the possibility that they were seeing two galaxies, each with its own black hole, superimposed on each other.
To try to eliminate this superposition possibility, they determined that the quasars were at the same red-shift determined distance and that there was a signature of only one host galaxy. “The double set of broad emission lines is pretty conclusive evidence of two black holes,” Boroson said. “If in fact this were a chance superposition, one of the objects must be quite peculiar. One nice thing about this binary black hole system is that we predict that we will see observable velocity changes within a few years at most. We can test our explanation that the binary black hole system is embedded in a galaxy that is itself the result of a merger of two smaller galaxies, each of which contained one of the two black holes.”
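As a rough plausibility check on the quoted figures (my own back-of-envelope, not anything from the Nature paper), Kepler's third law applied to a pair totalling about a billion solar masses at a tenth-of-a-parsec separation does give an orbital period of roughly a century and a relative speed of several thousand kilometres per second:

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PARSEC = 3.086e16    # metres per parsec
YEAR = 3.156e7       # seconds per year

m_small = 2.0e7 * M_SUN        # smaller black hole: 20 million suns
m_large = 50 * m_small         # the larger one is "50 times bigger"
a = 0.1 * PARSEC               # quoted separation

# Kepler's third law: P^2 = 4 pi^2 a^3 / (G (m1 + m2)); assume a circular orbit
period = math.sqrt(4 * math.pi**2 * a**3 / (G * (m_small + m_large)))
v_rel = 2 * math.pi * a / period

print(f"orbital period ~ {period / YEAR:.0f} years")   # roughly 90-100 years
print(f"relative speed ~ {v_rel / 1e3:.0f} km/s")      # roughly 6,000-7,000 km/s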
NEWS
Mar 5, 2009 8:13:29 GMT 4
Post by towhom on Mar 5, 2009 8:13:29 GMT 4
Nanotechnology risk research, ten years on
2020 Science, March 2, 2009
2020science.org/2009/03/02/nanotechnology-risk-research-ten-years-on/#more-958

Ten years ago to the month, one of the first research reports detailing the challenges of ensuring the safe use of engineered nanomaterials was delivered to the UK Health and Safety Executive. The report wasn’t for general release, and you’ll be hard pressed to find a copy of it in the public domain. But as a co-author, I have a copy skulking around in my archives. And given its ten-year anniversary, I’ve been browsing through it, to find out how much has progressed—or not, as the case may be!
The report focused on ultrafine aerosols, and the Health and Safety Laboratory’s ability to respond to then-current, and future, research needs. As such it was pretty wide-ranging, and focused extensively on exposure to incidental nanoscale aerosols—such as welding fume and engine emissions—in the workplace. But it did encompass the then-nascent field of nanotechnology and “nanophase material synthesis.” And some of these early assessments of the field bear revisiting.
For anyone interested in what was being written about the potential health and safety issues raised by engineered nanomaterials ten years ago, I’ve extracted a few sections of the report below—for the full thing, you’ll have to go to the UK Health and Safety Executive.
My apologies that the post is so long—I’m only expecting a dedicated few to plough through it. But at the least, you might want to skip to the end to see how the research recommendations of 1999 compare to those of today—you might be surprised!

A scoping study into ultrafine aerosol research and HSL’s ability to respond to current and future research needs. IR/A/99/03, Kenny, Maynard et al., 1999.

The introduction to the report starts:

Over the past few years a number of epidemiological studies have indicated a tentative link between ambient particulate concentrations, and morbidity and mortality rates (e.g. Dockery et al. 1993, Pope 1996, Schwartz et al. 1993, Schwartz et al. 1991). In all studies, particles with an aerodynamic diameter less than 10 µm (the PM10 fraction) have been implicated as the key agents. The lack of an apparent association between particles of specific composition and health effects has indicated the observed effects to be due to some physical aspect of the inhaled particles. A further link between particle size and health has been indicated by Dockery et al. (1993), who showed a more positive correlation between ill health and particles smaller than 2.5 µm than was seen with the PM10 fraction. The possibility of correlations between particle size and number concentration and toxicity has been demonstrated by Oberdörster et al. (1995) by exposing rats to PTFE particles ~20 nm in diameter. At concentrations of 10⁶ particles cm⁻³ (corresponding to an equivalent mass concentration of approximately 60 µg m⁻³) rats exposed for 30 minutes died within 4 hours. At lower concentrations a steep dose-response curve was observed between pulmonary inflammatory responses and particle number. More recent research has begun to indicate a possible material-independent link between inhaled particle surface area and selected toxicological endpoints (e.g. Lison et al. 1997). The possibility of a relationship between fine inhaled particles and ill health is now readily accepted, although research is still at a very early stage and most published data to date are open to a wide range of interpretations. Tentative hypotheses concerning possible mechanisms leading to toxicity have been proposed (e.g. Schlesinger 1995, Seaton et al. 1995, Donaldson and MacNee 1998), and the impact of inhaling ultrafine particles on both the respiratory and cardiovascular systems has been speculated on. The US EPA have already acted, partially as a response to earlier epidemiological studies, and introduced the PM2.5 sampling standard for environmental particulates. Whether the UK is to follow this lead is still under discussion. However, despite these steps, research so far has raised more questions than answers. There is debate over the interpretation of the epidemiological studies, and the appropriateness of chosen endpoints in toxicology tests. Contradictory experimental results are beginning to be published regarding ultrafine particle impact on health (e.g. Pekkanen et al. 1997). There also appear to be widely conflicting views on what constitutes an ultrafine particle, with implicit cut-off points ranging from 10 µm down to a few nm!
In amongst all the current confusion is the question of whether the alleged health implications of inhaling ultrafine aerosols are of relevance to the workplace. Much has been made of the apparent health problems amongst vulnerable sectors of the general population following environmental exposures, and the argument is followed through to the conclusion that within a healthy workforce similar problems are unlikely to be seen (backed up by a lack of evidence of severe health problems that are clearly linked to ultrafine aerosols). However, in part the current uncertainty over the toxicity of ultrafine particles is due to the very limited information available on the nature of so-called ultrafine particles. Inhaled particles associated with health effects in epidemiology studies have been very poorly defined, and even the particles used in most well controlled in vitro and in vivo experiments have been poorly characterised. Without basic information on particle size, morphology, composition and structure, it is clearly not feasible to make value judgements on the nature of inhaled particles, either in the general environment or in the workplace. In the light of the scarcity of information on particle characteristics, the Committee on the Medical Aspects of Air Pollutants has recommended the monitoring of such parameters at a number of environmental locations (COMEAP 1996). Similar measurements will be essential within the workplace before further speculations on the importance of ultrafine aerosols are made.

In reading this, it is important to remember that the state of the science is ten years on from when this was written—there is now a wealth of publications on the potentially health-relevant behavior of nanometer-scale particles. Yet the framework of questions set out largely remains as relevant now as it did then.
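One way to see why the choice of exposure metric matters so much: converting between metrics requires assumptions about the particles themselves, and the answers diverge wildly at the nanometre scale. The sketch below is a back-of-envelope illustration of my own (monodisperse spheres, an assumed density of 2200 kg/m³), not a calculation from the report: a million 20 nm particles per cubic centimetre amounts to only a few micrograms per cubic metre of mass, yet carries an enormous surface area per unit mass, which is why number- and surface-area-based metrics behave so differently from mass-based ones.

import math

def mass_concentration_ug_m3(number_per_cm3, diameter_nm, density_kg_m3):
    """Mass concentration (µg/m^3) for monodisperse spheres of the given size."""
    d = diameter_nm * 1e-9                                   # diameter in metres
    particle_mass = density_kg_m3 * math.pi * d**3 / 6.0     # kg per particle
    return number_per_cm3 * 1e6 * particle_mass * 1e9        # per m^3, then kg -> µg

def specific_surface_area_m2_g(diameter_nm, density_kg_m3):
    """Surface area per unit mass for spheres: 6 / (density * diameter)."""
    d = diameter_nm * 1e-9
    return 6.0 / (density_kg_m3 * d) / 1000.0                # m^2/kg -> m^2/g

DENSITY = 2200.0  # assumed particle density, kg/m^3 (roughly PTFE-like)
for d_nm in (20, 100, 2500):                                 # ultrafine, fine, PM2.5-sized
    m = mass_concentration_ug_m3(1e6, d_nm, DENSITY)
    ssa = specific_surface_area_m2_g(d_nm, DENSITY)
    print(f"{d_nm:5d} nm at 1e6 /cm^3: {m:12.1f} ug/m^3, {ssa:7.1f} m^2/g")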
Perhaps more interestingly, in 1999 the discussion was focused on understanding and managing the health impacts of inhaled particles, NOT whether those particles could be classified as arising from nanotechnology or not. As a result, the document tends to be more grounded in the science of how fine particles potentially impact on health, rather than how the poorly defined field of “nanotechnology” might lead to health effects.
The report goes on to consider the generation of ultrafine aerosols in the workplace:

In general, very little is known about any aspect of ultrafine aerosols in the workplace. There are a number of processes such as welding and soldering where intuitively one would expect large numbers of sub-µm particles. However, even in these areas, detailed measurements of particle size do not appear to have been made. There is a general feeling that in situations where large concentrations of particles are generated, agglomeration will remove ultrafine particles from the aerosol before it is inhaled, thus removing the need to consider ultrafines. However, this has not been verified, and evidence exists for significant mass concentrations of ultrafines existing close to generation sources. Interestingly, researchers are currently speculating that agglomerates with ultrafine primary particles may have the equivalent impact on the lungs as the individual primary particles. More is known about the products of internal combustion engines, although mainly from the viewpoint of monitoring and reducing environmental emissions. However, very little information on the nature of individual particles in the workplace exists.
Ultrafine aerosols tend to be formed either through nucleation (in particular homogeneous nucleation), through gas-to-particle reactions, or through the evaporation of liquid droplets. The majority of workplace ultrafine particles are likely to arise from the nucleation route, either as combustion products, or within saturated vapours arising from other sources (e.g. welding, smelting, laser ablation). Evaporation of sub-micron and even micron sized droplets of relatively high purity solvents will result in very small particles. Where the initial particles are highly charged, there is the possibility of any resulting fine particles exceeding the Rayleigh charge limit and fragmenting into even finer particles. This is a recognised method of generating ultrafine particles through electrospraying. To what extent this generation route is present in the workplace is unknown, although it is used for the specific generation of ultrafine particles during nanofabrication. Gas-to-particle generation of ultrafine aerosols accounts for the majority of non-combustion particles in the environment, although again the significance of this route within the workplace is unclear.
Following current interest in nanophase technology, and the use of ultrafine particles as precursors in nanophase materials, it is likely that the next few years will see an increase in the industrial generation and use of ultrafine particles. At present the planned generation of particles tends to be isolated to the production of ultrafine metal oxides such as TiO2, ZnO and fumed silica. Ultrafine carbon black is also currently generated on a commercial scale. Although the full extent to which ultrafine aerosols are generated as an unwanted by-product within industry is still largely unknown, there are clear cases where the generation rate is high, such as in welding and from internal combustion engines. Even so, data on the nature of generated aerosols in these areas are sparse.

There follows an assessment of different sources of nanoscale particles in the workplace, from welding to plastic fumes from laser cutting, and a range of other sources. This is all interesting information, but here I want to focus on the section on ultrafine aerosol precursors in nanophase technology:

Over the last ten years, interest in the unique properties associated with materials having structures on a nanometer scale has been increasing at something approaching an exponential rate. By restricting ordered atomic arrangements to increasingly small volumes, materials begin to be dominated by the atoms and molecules at the surfaces of these ‘domains’, often leading to properties that are startlingly different from the bulk material. As the domains become smaller, and hence more dominated by surface atoms and surface energies, so the properties become increasingly distinct from either the bulk material or the constituent atoms. So for instance, a relatively inert metal or metal oxide may become a highly effective catalyst when manufactured as ultrafine particles; opaque materials may become transparent when composed of nanoparticles, or vice versa; conductors may become insulators, and insulators conductors; nanophase materials may have many times the strength of the bulk material. All of these effects and many more have been observed with various materials. It is such material properties, unique to nanostructured materials, that have excited both the scientific and industrial communities in recent years.
Most nanophase materials are fabricated either from the liquid state, or the aerosol state, although some routes combine the two. The liquid route perhaps gives more control over the process in some cases. However, there is a general feeling at present that using aerosols is an inexpensive and versatile route to constructing these materials. Although there are many different production methods being explored, the general approach is to generate, capture and process an aerosol of particles with the dimensions of the final nanostructure. Typically this requires the generation of particles from 1 to 2 nm in diameter up to around 20 – 30 nm in diameter, depending on the required properties of the final material. Generation rates in research laboratories tend to be low (of the order of mg/hour), although where industrial production of nanoparticles has commenced, production rates of the order of tonnes per hour are seen.
At present, nanophase materials are an emerging technology, with the emphasis most definitely still on the research lab. However, there is considerable commercial commitment to the field, and it is certain that as scale-up problems are overcome, the mass production of both nanoparticles and nanophase materials will increase rapidly world-wide. When this occurs, the unique health problems associated with a unique product that can neither be treated as a bulk material nor on a molecular level will have to be fully addressed. In the meantime, there is a clear need to keep up to date with both developments in the technology, and any health concerns that may be associated with it.

Over the past ten years, commercial-scale production of nanoscale materials has moved on significantly, although perhaps not as much as some would have predicted. Yet the issues surrounding their safety still reflect (by and large) the issues raised here.
The report summarizes the state of nanotechnology research in 1999—which I’ll skip over—and goes on to consider where the rather quaintly termed nanophase technology was heading:

The indication from the scientific press is that there are as many potential applications for nanophase technology as there are groups working in the field. However a relatively small number of areas can be identified where commercial production of materials is most likely to be seen in the next 5 - 10 years. To understand the commercial pressure behind the progress of nanophase technology and its likely integration into industry, you only have to consider the potential market for successful applications. In the electronics industry in particular, the revenue arising from nanotechnology is likely to be well in excess of hundreds of billions of dollars. In other areas, such as coatings and catalysts, similar markets exist for successful applications. The market for ‘intelligent’ drug delivery systems, if successful, is likely to be immense. Reflecting this, the pharmaceutical industry is currently investing in excess of $14B per annum into advanced delivery systems.
Electronic applications
The reduction in particle size has a profound effect on electronic structure as nanometre dimensions are reached, leading to a number of unique electronic properties seen in individual and groups of nanoparticles. As an illustration, Si, which is semiconducting in the bulk solid, may be used to form nanometre sized pseudo-crystals with one of two types of atomic structure dominating its faces. Particles with one structure are fully conducting. Those with the other are good insulators. What does this mean/what are the general implications?
Perhaps the most widely recognised electronic property of nanoparticles is their ability to act as quantum dots. In arrays of such particles, the overall electronic characteristics are dominated by quantum effects within the particles, leading to novel applications. For instance, quantum dot devices can be used to create high-efficiency LEDs and electroluminescent plastics. High-frequency solid state lasers based on quantum dot technology are expected to form the basis of a major breakthrough in telecommunications, leading to significantly higher communication bandwidths. High-speed and high-capacity computer memory will also be possible using quantum dot technology. Success in fabricating viable quantum dot devices will bring about a major technological step within the electronics industry, leading to a multi-billion-dollar production industry, although progress at present is limited by the need to fabricate very precise arrays of well characterised particles. Current approaches include the use of colloids, nanolithography and aerosols.
Porous nanostructured semiconductors such as silicon have recently been shown to have electroluminescent properties. If this can be fabricated into integrated circuits, the basis for the next generation of high-speed optoelectronic computers will be laid. Nanoparticles are also being found to lead to improved properties in resistors and capacitors. Ultrafine conducting particles embedded in an insulating matrix have been shown to give a great range of resistances as well as showing very high temperature stability. Similarly, the use of nanoparticles in capacitors has been shown to give a high dielectric permittivity and a low dissipation factor, making them ideal for high-speed computer memory.
A particularly interesting phenomenon seen in nanophase materials is that of electrochromism: the modification of optical properties by the application of an electric field. Windows or mirrors coated with thin layers of these materials show variable light transmittance or reflection based on the magnitude of an applied electric field. It has also been found that nanophase materials may be used to form thin transparent films with high conductivity.
A number of other important areas relating to electronics are increasingly relying on the use of nanostructured materials. Solid state gas sensors show improved sensitivity when using films of sintered nanometre particles; high-temperature superconductors have a higher performance when formed of nanostructured materials; thermocouples benefit from nanostructure; and the magnetic properties of some nanostructured materials are already exploited to the full in magnetic storage media.
Coatings
Using nanophase materials to coat a wide range of substrates is being explored, and has been exploited in a wide range of applications. Hard nanophase coatings are important in the construction industry. The use of coatings with specific optical properties is of interest within the glass and photographic film industries. Dry coating technology is also benefiting from nanophase materials. It has been shown that the transport properties of large particles may be radically altered by the addition of a thin coating of fine particles of a suitable material. For instance, coating starch grains with fumed silica results in a highly flowable powder. In many cases, this coating need only be of the order of nanometres thick, and the use of nanoparticles in dry coating processes is already under investigation.
Chemical-mechanical polishing using nanoparticle slurries.
Surface polishing is a critical step in the processing of silicon wafers prior to semiconductor chip fabrication. Surface blemishes are a major source of both wafer and chip rejection in the electronics industry. By using polishing slurries consisting of nanoparticles, planarisation of wafer surfaces with fewer blemishes is possible.
Drug delivery systems.
A key goal in current drug delivery system research is the development of ‘intelligent’ systems that will deliver doses to specific sites within the body. One approach being actively considered is the use of coated nanoparticles. These would be capable of penetrating capillaries and being transported directly to the target site. The coating would include the drug to be delivered, components to prevent an immune response from the body and components to achieve site-specific or condition-specific delivery.
Nanoparticle catalysts
The modified surface chemistry of nanoparticles is well recognised for its catalytic properties in many materials. This, together with the associated surface-area-to-mass ratio for such particles, has led to intense interest in nanostructured catalysis within many fields.

After laying out the state of the science regarding the potential risks of inhaling nanoscale particles (which has advanced considerably over the past ten years), the report summarises (on the health impacts):

There has been little work in this field to date, so it is difficult to draw meaningful general conclusions from the published data. One of the reasons for this lack of data appears to be the difficulty in generating particles of standard and known size for use in in vitro studies. Particles used in both in vitro and in vivo studies have also tended to be relatively poorly characterised. Different effects both in vitro and in vivo have been observed with different sources of ultrafine particles, so the responses measured may be a function of the particle constituents rather than the particles per se. The differences observed have been attributed to the ability of particles with a particular composition to have different levels of free radical activity at their surface. Whilst there has been some work investigating synergy between acid aerosols and ultrafine particles (see below), there has been no work investigating the synergy between ultrafine particles and other potential airborne contaminants, e.g. allergens, VOCs and bacteria. Some of the animal models used to demonstrate toxicological endpoints require exposure regimes which are far in excess of any possible exposure in humans (e.g. 6 hours a day, 5 days a week for 3 months). Therefore, the extrapolation of such health effect data to humans should be treated with some caution. …

Interest in possible health effects following inhalation of ultrafine particles is high at present, and research is beginning to follow this interest. Inhalation toxicology has taken over from epidemiology over the past few years, and dominates the field at present. Dose-response relationships in rodents are being seen that indicate particle number or surface area to be more appropriate metrics than mass. The possibility of ultrafine particles acting as vectors to transport acids and metals to the alveolar region of the lung is also being explored. However, it is recognised that many of the current approaches being taken are lacking in various aspects, particularly regarding the significance of chosen endpoints and the characterisation of particle exposure, and a number of groups are now beginning to address these issues. This is an area that is particularly ripe for good research proposals to sympathetic funding bodies. The need to fully characterise the particles used in exposure and inhalation tests, as well as those that people are exposed to in the workplace and environment, is well understood, although the right combination of technical skills to achieve this seems to be lacking in many establishments. In particular there would appear to be significant scope for transferring analytical electron microscopy skills used in materials science and nanostructure analysis to the analysis of ultrafine aerosol particles. There is also a recognised need for in-vitro test systems that allow cell cultures to be exposed to the aerosol, rather than a particulate suspension. A small number of research groups are currently developing test systems allowing direct aerosol deposition.
Funding for fine particle research (PM2.5 sampling, and mass-based aerosol sampling) still dominates, but all aspects of ultrafine particle research are on the increase, and it is likely that the next few years will see significant funding opportunities and research in this area. Driven by concerns over environmental exposure, together with the need to address exposure limits for nuisance dusts, there is increasing interest in examining the impact of ultrafine particle exposure in the workplace.

The report covers a lot of ground on exposure measurement and control, which I won’t duplicate here (although a lot of the information remains highly pertinent). Instead, I’ll jump right to the end of the report, where a number of research recommendations are made. Remembering that these are focused specifically on inhalation exposure in the workplace, they sound surprisingly contemporary, being written 10 years ago:

Full quantification of ultrafine aerosol exposure in the workplace
- Measurement of number, size, surface area, composition, morphology, structure
- Investigation of the surface properties of workplace particles.
- Investigation of surface enrichment, role of modified surface activity below 10 nm, relevance of internal structure.
- Development of instrumentation and analytical techniques for surface area measurement and individual particle characterisation (Analytical Electron Microscopy)
Targeted epidemiology and toxicology studies
- Epidemiological evidence for ultrafine particle toxicity in the workplace
- Toxicity of well defined particles, and of particles characteristic of those found in the workplace.
- Investigation of mechanisms resulting in toxic responses, in relation to the known physical and chemical attributes of workplace particles.
Instrumentation
- Identification of deficiencies in instrumentation and monitoring requirements, and development of new technologies and methods.
Control
- Reassessment of the applicability of conventional control systems (including RPE) to reduce exposure to ultrafine particles, and the development of new approaches to exposure control.
Exposure Limits
- Assessment of current exposure limits in the light of available data on ultrafine particle toxicity, and the development of more appropriate approaches to exposure limits.
Ten years on, it is surprising how relevant this document still is. The major issues facing the safe use of nanomaterials were reasonably clear ten years back. And many of the research needs raised then remain today. Progress certainly has been made since then, and understanding of which types of nanomaterials are of greater concern has improved—the 1999 report doesn’t mention carbon nanotubes, for instance. But on the flip side, this is a report that was clearly unencumbered by the politics of nanotechnology that seem to have diffused through things today.

Perhaps most surprising, though, is that governments and others are still talking about the same issues - often as if they have discovered them for the first time - without doing that much about them. It would be churlish to ask where we might have been now if some of those 1999 recommendations had been listened to. But at least I can ask where we might be in 2019, if only we can break out of this endless cycle of re-inventing the nanotech risk report!

Endnote
Because this was an internal report, I have been careful to extract only parts of it that are of general interest and are not in any sense proprietary. That said, there is a lot of information in the full report that would be helpful to anyone grappling with addressing and managing potential occupational risks arising from nanoscale particle exposure in the workplace. It would be great if the UK Health and Safety Executive could release it for public use!

References

COMEAP (1996). Non-biological particles and health. HMSO Publications.
Dockery, D. W., Pope, C. A., Xu, X., Spengler, J. D., Ware, J. H., Fay, M. E., Ferris, B. G. and Speizer, F. E. (1993). An association between air pollution and mortality in six U.S. cities. N. Engl. J. Med., 329(24), 1753-1759.
Donaldson, K. and MacNee, W. (1998). The mechanics of lung injury caused by PM10. In: Air Pollution and Health. Eds: Hester and Harrison. Royal Society of Chemistry. ISBN 0-85404-245-8. pp. 21-32.
Lison, D., Lardot, C., Huaux, F., Zanetti, G. and Fubini, B. (1997). Influence of particle surface area on the toxicity of insoluble manganese dioxide dusts. Arch. Toxicol. 71, 725-729
Oberdörster, G., Gelein, R. M., Ferin, J. and Weiss, B. (1995). Association of particulate air pollution and acute mortality: involvement of ultrafine particles? Inhal. Toxicol., 7, 111-124.
Pekkanen, J., Timonen, K. L., Ruuskanen, J., Reponen, A. and Mirme, A. (1997). Effects of ultrafine and fine particles in urban air on peak expiratory flow among children with asthmatic symptoms. Environ. Res., 74, 24-33.
Pope, C. A. (1996). Adverse health effects of air pollutants in a nonsmoking population. Toxicology, 111, 149-155.
Schlesinger, R. B. (1995). Toxicological evidence for health effects from inhaled particulate pollution: does it support the human experience? Inhal. Toxicol., 7, 99-109.
Schwartz, J., Spix, C., Wichmann, H. E. and Malin, E. (1991). Air pollution and acute respiratory illness in five German communities. Environ. Res., 56, 1-4.
Schwartz, J., Slater, D., Larson, T. V., Pierson, W. E. and Koenig, J. Q. (1993). Particulate air pollution and hospital emergency room visits for asthma in Seattle. Am. Rev. Respir. Dis., 147, 826-831.
Seaton, A., MacNee, W., Donaldson, K. and Godden, D. (1995). Particulate air pollution and acute health effects. The Lancet, 345, 176-178.
NEWS
Mar 5, 2009 8:25:58 GMT 4
Post by towhom on Mar 5, 2009 8:25:58 GMT 4
'Spooky Action At A Distance' Of Quantum Mechanics Directly Observed
ScienceDaily, Mar. 4, 2009
www.sciencedaily.com/releases/2009/03/090304091231.htm

In quantum mechanics, a vanguard of physics where science often merges into philosophy, much of our understanding is based on conjecture and probabilities, but a group of researchers in Japan has moved one of the fundamental paradoxes in quantum mechanics into the lab for experimentation and observed some of the 'spooky action at a distance' of quantum mechanics directly. Hardy's Paradox, the axiom that we cannot make inferences about past events that haven't been directly observed while also acknowledging that the very act of observation affects the reality we seek to unearth, poses a conundrum that quantum physicists have sought to overcome for decades. How do you observe quantum mechanics, atomic and sub-atomic systems that are so small-scale they cannot be described in classical terms, when the act of looking at them changes them permanently? In a journal paper published in the New Journal of Physics, "Direct observation of Hardy's paradox by joint weak measurement with an entangled photon pair," authored by Kazuhiro Yokota, Takashi Yamamoto, Masato Koashi and Nobuyuki Imoto from the Graduate School of Engineering Science at Osaka University and the CREST Photonic Quantum Information Project in Kawaguchi City, the research group explains how they used a measurement technique that has an almost imperceptible impact on the experiment which allows the researchers to compile objectively provable results at sub-atomic scales. The experiment, based on Lucien Hardy's thought experiment, which follows the paths of two photons using interferometers, instruments that can be used to interfere photons together, is believed to throw up contradictory results that do not conform to our classical understanding of reality. Although Hardy's Paradox is rarely refuted, it was only a thought experiment until recently. Using an entangled pair of photons and an original but complicated method of weak measurement that does not interfere with the path of the photons, a significant step towards harnessing the reality of quantum mechanics has been taken by these researchers in Japan. As the researchers write, "Unlike Hardy's original argument, our demonstration reveals the paradox by observation, rather than inference. We believe the demonstrated joint weak measurement is useful not only for exploiting fundamental quantum physics, but also for various applications such as quantum metrology and quantum information technology." Journal reference: Yokota K, Yamamoto T, Koashi M and Imoto N. Direct observation of Hardy's paradox by joint weak measurement with an entangled photon pair. New Journal of Physics, March 4, 2009: www.iop.org/EJ/abstract/1367-2630/11/3/033011/ Adapted from materials provided by Institute of Physics.
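For anyone curious where the strange bookkeeping in weak-measurement treatments of Hardy's paradox comes from, here is a minimal numerical sketch (my own, using the standard two-interferometer textbook formulation, not the authors' photonic set-up or code). Each particle takes either the Overlapping (O) or Non-overlapping (N) arm; the O,O component is removed because the particles would annihilate, and we post-select on both particles exiting the normally dark output ports. The weak values then say each particle "was" in the overlapping arm, yet never both at once, with the books balanced by a joint occupation of minus one.

import numpy as np

# Single-particle arm states: Overlapping (O) and Non-overlapping (N)
O = np.array([1.0, 0.0])
N = np.array([0.0, 1.0])

# Pre-selected state: (|O,N> + |N,O> + |N,N>) / sqrt(3)   (the |O,O> term annihilates)
psi = (np.kron(O, N) + np.kron(N, O) + np.kron(N, N)) / np.sqrt(3)

# Post-selection: both particles exit the dark port D, with <D|O> = 1/sqrt(2), <D|N> = -1/sqrt(2)
D = (O - N) / np.sqrt(2)
phi = np.kron(D, D)

def weak_value(projector):
    """Weak value <phi|P|psi> / <phi|psi> of a projector P."""
    value = float(phi @ projector @ psi) / float(phi @ psi)
    return round(value, 6) + 0.0   # round for display; + 0.0 turns -0.0 into 0.0

P_O, P_N, I2 = np.outer(O, O), np.outer(N, N), np.eye(2)

print("N(O1)        =", weak_value(np.kron(P_O, I2)))   #  1: particle 1 "was" in O
print("N(O2)        =", weak_value(np.kron(I2, P_O)))   #  1: particle 2 "was" in O
print("N(O1 and O2) =", weak_value(np.kron(P_O, P_O)))  #  0: ...yet never both at once
print("N(N1 and N2) =", weak_value(np.kron(P_N, P_N)))  # -1: the negative joint occupation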
NEWS
Mar 5, 2009 8:58:12 GMT 4
Post by towhom on Mar 5, 2009 8:58:12 GMT 4
Major City frauds uncovered by police
Detectives and SFO reveal inquiry into big 'Ponzi' scheme and several 'mini-Madoffs'
The Independent, by Robert Verkaik and Mark Hughes, Thursday, 5 March 2009
www.independent.co.uk/news/uk/crime/major-city-frauds-uncovered-by-police-1637707.html

A spate of Bernard Madoff-style scams that threaten to bring misery to thousands of investors is being investigated by police and the Serious Fraud Office, The Independent has learnt. Bogus investment schemes have been uncovered by investigators focusing on crime resulting from the credit crunch. One senior officer has called them "mini-Madoffs", a reference to the US fund manager Bernard Madoff, who is accused of profiting from a £30bn pyramid investment fraud – or Ponzi scheme – which paid investors returns from their own money, or cash paid by subsequent investors, rather than from the scheme's profits. The Serious Fraud Office (SFO) is still investigating Mr Madoff's activities in Britain. In an interview with The Independent, Richard Alderman, the director of the SFO, said he expected other alleged cases of "fraud on investors" to be made public soon. One allegedly involves a "big Ponzi" fraud, similar to that used by Mr Madoff, he added, without revealing further details of the case. The SFO is also offering advice on how to avoid falling victim to a Ponzi scam. Mr Alderman said: "Clearly, in view of our interest in Bernie Madoff and Sir Allen Stanford [the Texan financier accused of fraud], people are talking to us about red flags for hedge funds, because as the stories unravel it is very interesting to understand the structure of what happened and what could have been picked up by people through due diligence." He warned: "We are finding that people are talking to us about that and we are learning from them. We are not sharing operational detail but sometimes it is right that we feed back what we learn when we can. There is a lot more we can do on that; what kind of things due diligence could pick up." Most Ponzi schemes – named after Charles Ponzi, who became notorious for using the technique in America in the 1920s – claim to offer 20 per cent returns and collapse quickly, but Mr Madoff's returns were 10 per cent. Because he offered his investors a modest but steady and consistent income from their money, he was able to keep up his pretence for nearly 50 years. However, his scheme relied on a healthy stock market, so that depositors would be unlikely to collectively remove their money. When the world's financial markets tumbled and people did try to draw out their funds en masse, his scheme collapsed. Detective Superintendent Bob Wishard, of the City of London Police fraud squad, said: "The growing number of frauds in the City and the deepening recession has prompted speculation that Britain could soon see its first £1bn fraud. I'm not aware of anything as big as £1bn, but there are undoubtedly some huge investment frauds going on – mini-Madoffs – that, in the fullness of time, will come to our attention." Mr Alderman said the "ripple effect" of credit-crunch fraud was bringing misery to thousands. His organisation is investigating a range of financial crimes and is shortly expected to announce developments in cases involving investments, mortgages and fraudulent trading. He added: "Some of them are ones where we have been asked to look at something that has gone on, and we are conducting a preliminary investigation. With others we are digging deeper. Some of it we have identified ourselves.
At least one [case] comes from a whistleblower. We are talking about quite large-scale fraud as a result of the credit crunch." He promised to take tough action in cases that justified prosecution, and said: "This is the year in which I am expecting delivery. This calendar year is the year I want a lot of cases in the public domain out in court. Some corporates, some individuals, some cases involving individuals and corporates. I am expecting to send out some very strong messages as a result of what we are getting out into court." The SFO has conducted a review of credit-crunch fraud which has identified the scale of the problem facing regulators. One particular area of concern, Mr Alderman said, was large-scale mortgage fraud involving "professional agents" such as solicitors and surveyors. It was clear, he said, that the recession had placed huge pressures on failing businesses. "It can give rise to temptation for businesses that are in great difficulties. And what we have seen before is that there are temptations to make various assumptions in their accounts," he explained. "The obvious one is the over-recognition of revenues in the accounts: booking in year one all of the revenues that you hope to obtain from a contract over a series of years – things like that; going way beyond any prudent accounting principles. "We see that, and we see the temptation for people to keep trading when they are effectively insolvent. The result is that they are not able to succeed in doing that in a recession, and lots of people lose out."
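To make the mechanics the article describes concrete, here is a toy cash-flow model (illustrative only, with invented numbers; nothing to do with the SFO's actual cases). Paper "returns" are credited to investors while real payouts come out of new deposits, so the operator's cash holds up only as long as inflows outpace redemptions; when new money dries up and investors redeem en masse, the gap between what is owed and what is actually held is exposed.

def run_ponzi(years, promised_return, inflows, redemption_rates):
    balance = 0.0   # cash the operator actually holds
    owed = 0.0      # what investors believe their accounts are worth
    for year in range(years):
        balance += inflows[year]
        owed = owed * (1 + promised_return) + inflows[year]   # paper growth, never really earned
        withdrawals = owed * redemption_rates[year]
        balance -= withdrawals                                # payouts come from real cash...
        owed -= withdrawals                                   # ...but are booked against paper value
        if balance < 0:
            return f"collapses in year {year + 1}: cash {balance:,.0f} vs promised {owed:,.0f}"
    return f"survives {years} years: cash {balance:,.0f} vs promised {owed:,.0f}"

# Calm markets: steady new money, few withdrawals; the shortfall quietly grows
print(run_ponzi(10, 0.10, [100.0] * 10, [0.05] * 10))
# Credit crunch: inflows dry up and investors redeem en masse
print(run_ponzi(10, 0.10, [100.0] * 5 + [10.0] * 5, [0.05] * 5 + [0.40] * 5))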
NEWS
Mar 5, 2009 16:55:46 GMT 4
Post by Eagles Disobey on Mar 5, 2009 16:55:46 GMT 4
NEWS
Mar 5, 2009 20:47:32 GMT 4
Post by towhom on Mar 5, 2009 20:47:32 GMT 4
Welcome to Explorations in Science with Dr. Michio Kaku
Theoretical Physicist, Professor, Bestselling Author, Popularizer of Science
The Physics of Interstellar Travel
To one day, reach the stars.
mkaku.org/home/?page_id=250

When discussing the possibility of interstellar travel, there is something called “the giggle factor.” Some scientists tend to scoff at the idea of interstellar travel because of the enormous distances that separate the stars. According to Special Relativity (1905), no usable information can travel faster than light locally, and hence it would take centuries to millennia for an extra-terrestrial civilization to travel between the stars. Even the familiar stars we see at night are about 50 to 100 light years from us, and our galaxy is 100,000 light years across. The nearest galaxy is 2 million light years from us. The critics say that the universe is simply too big for interstellar travel to be practical. Similarly, investigations into UFOs that may originate from another planet are sometimes the “third rail” of someone’s scientific career. There is no funding for anyone seriously looking at unidentified objects in space, and one’s reputation may suffer if one pursues an interest in these unorthodox matters. In addition, perhaps 99% of all sightings of UFOs can be dismissed as being caused by familiar phenomena, such as the planet Venus, swamp gas (which can glow in the dark under certain conditions), meteors, satellites, weather balloons, even radar echoes that bounce off mountains. (What is disturbing to a physicist, however, is the remaining 1% of these sightings, which are multiple sightings made by multiple methods of observation. Some of the most intriguing sightings have been made by seasoned pilots and passengers aboard airline flights which have also been tracked by radar and have been videotaped. Sightings like this are harder to dismiss.) But to an astronomer, the existence of intelligent life in the universe is a compelling idea by itself: extra-terrestrial beings, centuries to millennia more advanced than us, may exist around other stars. Within the Milky Way galaxy alone, there are over 100 billion stars, and there are an uncountable number of galaxies in the universe. About half of the stars we see in the heavens are double stars, probably making them unsuitable for intelligent life, but the remaining half probably have solar systems somewhat similar to ours. Although none of the over 100 extra-solar planets so far discovered in deep space resemble ours, it is inevitable, many scientists believe, that one day we will discover small, earth-like planets which have liquid water (the “universal solvent” which made possible the first DNA perhaps 3.5 billion years ago in the oceans). The discovery of earth-like planets may take place within 20 years, when NASA intends to launch the space interferometry satellite into orbit which may be sensitive enough to detect small planets orbiting other stars. So far, we see no hard evidence of signals from extra-terrestrial civilizations from any earth-like planet. The SETI project (the search for extra-terrestrial intelligence) has yet to produce any reproducible evidence of intelligent life in the universe from such earth-like planets, but the matter still deserves serious scientific analysis. The key is to reanalyze the objection to faster-than-light travel. A critical look at this issue must necessarily embrace two new observations.
First, Special Relativity itself was superseded by Einstein’s own more powerful General Relativity (1915), in which faster than light travel is possible under certain rare conditions. The principal difficulty is amassing enough energy of a certain type to break the light barrier. Second, one must therefore analyze extra-terrestrial civilizations on the basis of their total energy output and the laws of thermodynamics. In this respect, one must analyze civilizations which are perhaps thousands to millions of years ahead of ours. The first realistic attempt to analyze extra-terrestrial civilizations from the point of view of the laws of physics and the laws of thermodynamics was by Russian astrophysicist Nikolai Kardashev. He ranked possible civilizations on the basis of total energy output, which could be quantified and used as a guide to explore the dynamics of advanced civilizations:

Type I: this civilization harnesses the energy output of an entire planet.
Type II: this civilization harnesses the energy output of a star, and generates about 10 billion times the energy output of a Type I civilization.
Type III: this civilization harnesses the energy output of a galaxy, or about 10 billion times the energy output of a Type II civilization.

A Type I civilization would be able to manipulate truly planetary energies. They might, for example, control or modify their weather. They would have the power to manipulate planetary phenomena, such as hurricanes, which can release the energy of hundreds of hydrogen bombs. Perhaps volcanoes or even earthquakes may be altered by such a civilization. A Type II civilization may resemble the Federation of Planets seen on the TV program Star Trek (which is capable of igniting stars and has colonized a tiny fraction of the nearby stars in the galaxy). A Type II civilization might be able to manipulate the power of solar flares. A Type III civilization may resemble the Borg, or perhaps the Empire found in the Star Wars saga. They have colonized the galaxy itself, extracting energy from hundreds of billions of stars. By contrast, we are a Type 0 civilization, which extracts its energy from dead plants (oil and coal). Growing at the average rate of about 3% per year, however, one may calculate that our own civilization may attain Type I status in about 100-200 years, Type II status in a few thousand years, and Type III status in about 100,000 to a million years. These time scales are insignificant when compared with the universe itself. On this scale, one may now rank the different propulsion systems available to different types of civilizations:

Type 0
- Chemical rockets
- Ionic engines
- Fission power
- EM propulsion (rail guns)
Type I
- Ram-jet fusion engines
- Photonic drive
Type II
- Antimatter drive
- Von Neumann nano probes
Type III

Propulsion systems may be ranked by two quantities: their specific impulse, and final velocity of travel. Specific impulse equals thrust multiplied by the time over which the thrust acts. At present, almost all our rockets are based on chemical reactions. We see that chemical rockets have the smallest specific impulse, since they only operate for a few minutes. Their thrust may be measured in millions of pounds, but they operate for such a small duration that their specific impulse is quite small. NASA is experimenting today with ion engines, which have a much larger specific impulse, since they can operate for months, but have an extremely low thrust. For example, an ion engine which ejects cesium ions may have a thrust of only a few ounces, but in deep space it may reach great velocities over a period of time since it can operate continuously. Ion engines make up in time what they lose in thrust. Eventually, long-haul missions between planets may be conducted by ion engines. For a Type I civilization, one can envision newer types of technologies emerging. Ram-jet fusion engines have an even larger specific impulse, operating for years by consuming the free hydrogen found in deep space. However, it may take decades before fusion power is harnessed commercially on earth, and the proton-proton fusion process of a ram-jet fusion engine may take even more time to develop, perhaps a century or more. Laser or photonic engines, because they might be propelled by laser beams inflating a gigantic sail, may have even larger specific impulses. One can envision huge laser batteries placed on the moon which generate large laser beams which then push a laser sail in outer space. This technology, which depends on operating large bases on the moon, is probably many centuries away. For a Type II civilization, a new form of propulsion is possible: anti-matter drive. Matter-anti-matter collisions provide a 100% efficient way in which to extract energy from matter. However, anti-matter is an exotic form of matter which is extremely expensive to produce. The atom smasher at CERN, outside Geneva, is barely able to make tiny samples of anti-hydrogen gas (anti-electrons circling around anti-protons). It may take many centuries to millennia to bring down the cost so that it can be used for space flight. Given the astronomical number of possible planets in the galaxy, a Type II civilization may try a more realistic approach than conventional rockets and use nanotechnology to build tiny, self-replicating robot probes which can proliferate through the galaxy in much the same way that a microscopic virus can self-replicate and colonize a human body within a week. Such a civilization might send tiny robot von Neumann probes to distant moons, where they will create large factories to reproduce millions of copies of themselves. Such a von Neumann probe need only be the size of a bread-box, using sophisticated nanotechnology to make atomic-sized circuitry and computers. Then these copies take off to land on other distant moons and start the process all over again. Such probes may then wait on distant moons, waiting for a primitive Type 0 civilization to mature into a Type I civilization, which would then be interesting to them. (There is the small but distinct possibility that one such probe was left on our own moon billions of years ago by a passing space-faring civilization. This, in fact, is the basis of the movie 2001, perhaps the most realistic portrayal of contact with extra-terrestrial intelligence.)
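To put rough numbers on the chemical-versus-ion comparison above (my own illustration with typical textbook specific-impulse values, not figures from the essay), the Tsiolkovsky rocket equation shows why an engine with feeble thrust but a high specific impulse ultimately reaches far greater speeds from the same propellant fraction:

import math

G0 = 9.81  # m/s^2, standard gravity, used to convert specific impulse into exhaust velocity

def delta_v(isp_seconds, mass_ratio):
    """Ideal velocity change from the rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    return isp_seconds * G0 * math.log(mass_ratio)

MASS_RATIO = 10.0  # 90% of the initial mass is propellant (assumed for illustration)
for name, isp in (("chemical rocket", 450), ("ion engine", 3000)):
    dv = delta_v(isp, MASS_RATIO) / 1000.0
    print(f"{name:15s}: Isp {isp:4d} s -> delta-v ~{dv:5.1f} km/s")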
The problem, as one can see, is that none of these engines can exceed the speed of light. Hence, Type 0, I, and II civilizations can probably send probes or colonies only to within a few hundred light years of their home planet. Even with von Neumann probes, the best that a Type II civilization can achieve is to create a large sphere of billions of self-replicating probes expanding just below the speed of light. To break the light barrier, one must utilize General Relativity and the quantum theory. This requires energies available only to a very advanced Type II civilization or, more likely, a Type III civilization.

Special Relativity states that no usable information can travel locally faster than light. One may therefore go faster than light only by globally warping space and time, i.e. by using General Relativity. In other words, in such a rocket, a passenger who is watching the motion of passing stars would say he is going slower than light. But once the rocket arrives at its destination and clocks are compared, it appears as if the rocket went faster than light because it warped space and time globally, either by taking a shortcut, or by stretching and contracting space.

There are at least two ways in which General Relativity may yield faster-than-light travel. The first is via wormholes, or multiply connected Riemann surfaces, which may give us a shortcut across space and time. One possible geometry for such a wormhole is to assemble stellar amounts of energy in a spinning ring (creating a Kerr black hole). Centrifugal force prevents the spinning ring from collapsing. Anyone passing through the ring would not be ripped apart, but would wind up in an entirely different part of the universe. This resembles the Looking Glass of Alice, with the rim of the Looking Glass being the black hole, and the mirror being the wormhole. Another method might be to tease apart a wormhole from the "quantum foam" which physicists believe makes up the fabric of space and time at the Planck length (10 to the minus 33 centimeters).

The problems with wormholes are many:
a) One version requires enormous amounts of positive energy, e.g. a black hole. Positive-energy wormholes have event horizons and hence only give us a one-way trip. One would need two black holes (one for the original trip, and one for the return trip) to make interstellar travel practical. Most likely only a Type III civilization would be able to harness this much power.
b) Wormholes may be unstable, both classically and quantum mechanically. They may close up as soon as you try to enter them, or radiation effects may soar as you enter them, killing you.
c) One version requires vast amounts of negative energy. Negative energy does exist (in the form of the Casimir effect), but huge quantities of negative energy will be beyond our technology, perhaps for millennia. The advantage of negative-energy wormholes is that they do not have event horizons and hence are more easily traversable.
d) Another version requires large amounts of negative matter. Unfortunately, negative matter has never been seen in nature (it would fall up, rather than down). Any negative matter on the earth would have fallen up billions of years ago, leaving the earth devoid of it.

The second possibility is to use large amounts of energy to continuously stretch space and time (i.e. contracting the space in front of you, and expanding the space behind you).
Since only empty space is contracting or expanding, one may exceed the speed of light in this fashion. (Empty space itself can expand or contract faster than light; the Big Bang, for example, expanded much faster than the speed of light.) The problem with this approach, again, is that vast amounts of energy are required, making it feasible only for a Type III civilization. Energy scales for all these proposals are on the order of the Planck energy (10 to the 19 billion electron volts, which is a quadrillion times larger than the energy of our most powerful atom smasher; a rough worked check of these figures appears at the end of this post).

Lastly, there is the fundamental physics problem of whether "topology change" is possible within General Relativity (which would also make possible time machines, or closed time-like curves). General Relativity allows for closed time-like curves and wormholes (often called Einstein-Rosen bridges), but it unfortunately breaks down at the enormous energies found at the center of black holes or at the instant of Creation. In these extreme energy domains, quantum effects dominate over classical gravitational effects, and one must go to a "unified field theory" of quantum gravity.

At present, the most promising (and only) candidate for a "theory of everything", including quantum gravity, is superstring theory or M-theory. It is the only theory in which quantum forces may be combined with gravity to yield finite results; no other theory can make this claim. With only mild assumptions, one may show that the theory allows for quarks arranged in much the same configuration as is found in the current Standard Model of sub-atomic physics. Because the theory is defined in 10- or 11-dimensional hyperspace, it introduces a new cosmological picture: that our universe is a bubble or membrane floating in a much larger multiverse or megaverse of bubble-universes. Unfortunately, although black hole solutions have been found in string theory, the theory is not yet developed enough to answer basic questions about wormholes and their stability. Within the next few years, or perhaps within a decade, many physicists believe that string theory will mature to the point where it can answer these fundamental questions about space and time. The problem is well-defined. Unfortunately, even though the leading scientists on the planet are working on the theory, no one on earth is smart enough to solve the superstring equations.

Conclusion

Most scientists doubt interstellar travel because the light barrier is so difficult to break. However, to go faster than light, one must go beyond Special Relativity to General Relativity and the quantum theory. Therefore, one cannot rule out interstellar travel if an advanced civilization can attain enough energy to destabilize space and time. Perhaps only a Type III civilization can harness the Planck energy, the energy at which space and time become unstable. Various proposals have been made for exceeding the light barrier (including wormholes and stretched or warped space), but all of them require energies found only in Type III galactic civilizations. On a mathematical level, ultimately, we must wait for a fully quantum mechanical theory of gravity (such as superstring theory) to answer these fundamental questions, such as whether wormholes can be created and whether they are stable enough to allow for interstellar travel.
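For reference, the Planck-scale figures quoted above can be checked directly from the fundamental constants; a short worked comparison (my addition, with the collider energy taken as an assumption of roughly the LHC's design collision energy):

\ell_P = \sqrt{\hbar G / c^{3}} \approx 1.6 \times 10^{-33}\ \mathrm{cm}, \qquad
E_P = \sqrt{\hbar c^{5} / G} \approx 1.2 \times 10^{19}\ \mathrm{GeV}, \qquad
\frac{E_P}{E_{\mathrm{collider}}} \approx \frac{1.2 \times 10^{19}\ \mathrm{GeV}}{1.4 \times 10^{4}\ \mathrm{GeV}} \sim 10^{15},

i.e. about a quadrillion, as stated in the essay.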
|
|
|
NEWS
Mar 5, 2009 21:02:48 GMT 4
Post by towhom on Mar 5, 2009 21:02:48 GMT 4
Drug blocks 2 of world's deadliest emerging viruses
Existing, low-cost anti-malaria treatment found effective in laboratory test tube experiments
EurekAlert Public Release: 5-Mar-2009
www.eurekalert.org/pub_releases/2009-03/nyph-dbt030509.php

NEW YORK -- Two highly lethal viruses that have emerged in recent outbreaks are susceptible to chloroquine, an established drug used to prevent and treat malaria, according to a new basic science study by researchers at Weill Cornell Medical College in the Journal of Virology. Due to the study's significance, it was published yesterday, online, in advance of the first April print issue. The two henipaviruses that are the subject of the study -- Hendra Virus (HeV) and Nipah Virus (NiV) -- emerged during the 1990s in Australia and Southeast Asia. Harbored by fruit bats, they cause potentially fatal encephalitis and respiratory disease in humans, with a devastating 75 percent fatality rate. More recently, NiV outbreaks in Bangladesh involving human-to-human transmission have focused attention on NiV as a global health concern.

The researchers, based in Weill Cornell's pediatrics department, were surprised by their discovery that chloroquine, a safe, low-cost agent that has been used to combat malaria for more than 50 years, is a highly active inhibitor of infection by Hendra and Nipah. "The fact that chloroquine is safe and widely used in humans means that it may bypass the usual barriers associated with drug development and move quickly into clinical trials," says Dr. Anne Moscona, professor of pediatrics and microbiology & immunology at Weill Cornell Medical College and senior author of the study. She is also vice chair for research of pediatrics at NewYork-Presbyterian Hospital/Weill Cornell Medical Center. "Chloroquine stands a good chance of making it through the development process in time to prevent further outbreaks of these deadly infections," adds Dr. Moscona. Like the avian flu, SARS, and Ebola viruses, Hendra and Nipah are zoonotic pathogens. That means they originate in certain animals but can jump between animal species and between animals and humans. There are currently no vaccines or treatments against the two henipaviruses, which are listed by the U.S. government as possible bioterror agents.

Along with Dr. Moscona and her team, the study's lead author and fellow faculty member Dr. Matteo Porotto, in collaboration with Dr. Fraser Glickman at Rockefeller University's High Throughput Screening Resource Center, developed a screening test that substituted a non-lethal cow virus for the real thing. They engineered a viral hybrid, called a pseudotype, featuring proteins from the Hendra virus on its surface but lacking Hendra's genome. The pseudotype behaves in every way like its deadly counterpart, but ultimately, it only succeeds in replicating its non-lethal self. The researchers designed their screening technique specially to reflect molecular reactions at several stages of the pathogen's lifecycle. Instead of focusing exclusively on how the virus enters the cell, like other pseudotyped screening assays, explains Dr. Porotto, the researchers were able to consider how Hendra matures, buds, and exits the cell, and to screen for compounds that interfere with its development at various stages. Chloroquine does not prevent Hendra or Nipah virus from entering the cell. Instead, the chloroquine molecule appears to block the action of a key enzyme, called cathepsin L, which is essential to the virus's growth and maturation.
Without this enzyme, newly formed Hendra or Nipah viruses cannot process the protein that permits the viruses to fuse with the host cell. Newly formed viruses then cannot spread the infection; in other words, they can invade, but cannot cause disease. Several other zoonotic viruses depend on cathepsin L -- most notably, Ebola. "Our findings, and our methods, could easily be applied to the study of Ebola and other emerging diseases," Dr. Porotto says. The researchers are confident that the use of this new screening strategy will build up the number of viral targets available for study and expand the antiviral research field at a time when new antivirals are desperately needed for emerging pathogens. The group anticipates collaborating on field studies in the near future, to assess the potential for efficacy of chloroquine and related compounds in Nipah-infected humans. Additional co-authors included Gianmarco Orefice, Christine Yokoyama and Michael Sganga of Weill Cornell Medical College; Dr. Bruce Mungall and Mohamad Aljofan of CSIRO Livestock Industries, Geelong, Australia; Ronald Realubit and Dr. Fraser Glickman of Rockefeller University; and Dr. Michael Whitt of the University of Tennessee Health Science Center, Memphis, Tenn. Weill Cornell Medical College Weill Cornell Medical College, Cornell University's medical school located in New York City, is committed to excellence in research, teaching, patient care and the advancement of the art and science of medicine, locally, nationally and globally. Weill Cornell, which is a principal academic affiliate of NewYork-Presbyterian Hospital, offers an innovative curriculum that integrates the teaching of basic and clinical sciences, problem-based learning, office-based preceptorships, and primary care and doctoring courses. Physicians and scientists of Weill Cornell Medical College are engaged in cutting-edge research in areas such as stem cells, genetics and gene therapy, geriatrics, neuroscience, structural biology, cardiovascular medicine, transplantation medicine, infectious disease, obesity, cancer, psychiatry and public health -- and continue to delve ever deeper into the molecular basis of disease in an effort to unlock the mysteries of the human body in health and sickness. In its commitment to global health and education, the Medical College has a strong presence in places such as Qatar, Tanzania, Haiti, Brazil, Austria and Turkey. Through the historic Weill Cornell Medical College in Qatar, the Medical College is the first in the U.S. to offer its M.D. degree overseas. Weill Cornell is the birthplace of many medical advances -- including the development of the Pap test for cervical cancer, the synthesis of penicillin, the first successful embryo-biopsy pregnancy and birth in the U.S., the first clinical trial of gene therapy for Parkinson's disease, the first indication of bone marrow's critical role in tumor growth, and most recently, the world's first successful use of deep brain stimulation to treat a minimally conscious brain-injured patient. For more information, visit www.med.cornell.edu.
|
|
|
NEWS
Mar 5, 2009 21:08:13 GMT 4
Post by towhom on Mar 5, 2009 21:08:13 GMT 4
Scientists closer to making invisibility cloak a reality
EurekAlert Public Release: 5-Mar-2009
www.eurekalert.org/pub_releases/2009-03/sfia-sct030509.php

J.K. Rowling may not have realized just how close Harry Potter's invisibility cloak was to becoming a reality when she introduced it in the first book of her best-selling fictional series in 1998. Scientists, however, have made huge strides in the past few years in the rapidly developing field of cloaking. Ranked the number five breakthrough of the year by Science magazine in 2006, cloaking involves making an object invisible or undetectable to electromagnetic waves. A paper published in the March 2009 issue of SIAM Review, "Cloaking Devices, Electromagnetic Wormholes, and Transformation Optics," presents an overview of the theoretical developments in cloaking from a mathematical perspective.

One method involves light waves bending around a region or object and emerging on the other side as if the waves had passed through empty space, creating an "invisible" region which is cloaked. For this to happen, however, the object or region has to be concealed using a cloaking device, which must be undetectable to electromagnetic waves. Manmade devices called metamaterials use structures having cellular architectures designed to create combinations of material parameters not available in nature. Mathematics is essential in designing the parameters needed to create metamaterials and to show that the material ensures invisibility. The mathematics comes primarily from the field of partial differential equations, in particular from the study of equations for electromagnetic waves described by the Scottish mathematician and physicist James Clerk Maxwell in the 1860s.

One of the "wrinkles" in the mathematical model of cloaking is that the transformations that define the required material parameters have singularities, that is, points at which the transformations fail to exist or fail to have properties such as smoothness or boundedness that are required to demonstrate cloaking. However, the singularities are removable; that is, the transformations can be redefined over the singularities to obtain the desired results. The authors of the paper describe this as "blowing up a point." They also show that if there are singularities along a line segment, it is possible to "blow up a line segment" to generate a "wormhole." (This is a design for an optical device inspired by, but distinct from, the notion of a wormhole appearing in the field of gravitational physics.) The cloaking version of a wormhole allows for an invisible tunnel between two points in space through which electromagnetic waves can be transmitted.

Some possible applications for cloaking via electromagnetic wormholes include the creation of invisible fiber optic cables, for example for security devices, and scopes for MRI-assisted medical procedures for which metal tools would otherwise interfere with the magnetic resonance images. The invisible optical fibers could even make three-dimensional television screens possible in the distant future. The effectiveness and implementation of cloaking devices in practice, however, are dependent on future developments in the design, investigation, and production of metamaterials. The "muggle" world will have to wait on further scientific research before Harry Potter's invisibility cloak can become a reality.
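To give a feel for the "blowing up a point" idea, here is a minimal sketch (my own illustrative example, not code or formulas from the SIAM paper) of the radial map commonly used in spherical cloaking constructions: every point of a ball of radius R2 is pushed outward so that the centre point is stretched onto a sphere of radius R1, and anything placed inside that inner sphere is hidden from the transformed fields.

# Illustrative sketch of a radial "blow up a point" map used in spherical
# cloaking constructions: r' = R1 + r * (R2 - R1) / R2 for 0 <= r <= R2.
# The centre point r = 0 is blown up to the sphere r' = R1; the region
# r' < R1 is the cloaked region. R1 and R2 are illustrative values; the
# singular behaviour discussed in the article lives at r = 0, where the
# inverse map degenerates.

def cloak_map(r: float, R1: float = 1.0, R2: float = 2.0) -> float:
    """Map a radius in the original (virtual) space into the physical space."""
    if not 0.0 <= r <= R2:
        raise ValueError("map is only defined for 0 <= r <= R2")
    return R1 + r * (R2 - R1) / R2

print(cloak_map(0.0))  # 1.0 -> the single centre point becomes the inner sphere
print(cloak_map(2.0))  # 2.0 -> the outer boundary is left unchanged, so the
                       #        cloak is undetectable from outside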
The paper is co-authored by Allan Greenleaf of the University of Rochester; Yaroslav Kurylev of University College London; Matti Lassas of Helsinki University of Technology; and Gunther Uhlmann of the University of Washington. To read this article in its entirety, visit: www.siam.org/journals/sirev/51-1/71682.html. ABOUT SIAM: The Society for Industrial and Applied Mathematics (SIAM) is an international community of over 12,000 individual members, including applied and computational mathematicians, computer scientists, and other scientists and engineers. The Society advances the fields of applied mathematics and computational science by publishing a series of premier journals and a variety of books, sponsoring a wide selection of conferences, and through various other programs. More information about SIAM is available at www.siam.org.
|
|
|
NEWS
Mar 5, 2009 21:23:09 GMT 4
Post by towhom on Mar 5, 2009 21:23:09 GMT 4
Engineers ride 'rogue' laser waves to build better light sources
New technology presented at world's largest optical communication conference produces better sources of white light
EurekAlert Public Release: 5-Mar-2009
www.eurekalert.org/pub_releases/2009-03/osoa-er030509.php
An artist's representation of a rogue wave appearing during supercontinuum generation. Credit: UCLA

WASHINGTON — A freak wave at sea is a terrifying sight. Seven stories tall, wildly unpredictable, and incredibly destructive, such waves have been known to emerge from calm waters and swallow ships whole. But rogue waves of light -- rare and explosive flare-ups that are mathematically similar to their oceanic counterparts -- have recently been tamed by a group of researchers at the University of California, Los Angeles (UCLA). UCLA's Daniel Solli, Claus Ropers, and Bahram Jalali are putting rogue light waves to work in order to produce brighter, more stable white light sources, a breakthrough in optics that may pave the way for better clocks, faster cameras, and more powerful radar and communications technologies. Their findings will be presented during the Optical Fiber Communication Conference and Exposition/National Fiber Optic Engineers Conference (OFC/NFOEC), taking place March 22-26 in San Diego.

Rogue bursts of light were first spotted a year ago during the generation of a special kind of radiation called supercontinuum (SC). SC light is created by shooting laser pulses into crystals and optical fibers. Like the incandescent bulb in a lamp, it shines with a white light that spans an extremely broad spectrum. But unlike a bulb's soft diffuse glow, SC light maintains the brightness and directionality of a laser beam. This makes it suitable for a wide variety of applications -- a fact recognized by the 2005 Nobel Prize in Physics, awarded in part to scientists who used SC light to measure atomic transitions with extraordinary accuracy. Despite more than 40 years of research, SC light has proven to be difficult to control and prone to instability. Though rogue waves are not the cause of this instability, the UCLA researchers suspected that a better understanding of how noise in SC light triggers rogue waves could improve their control of this bright white light.

Rogue waves occur randomly in SC light and are so short-lived that the team had to employ a new technique just to spot them. Although they are rare, they are more common than would be predicted by a bell curve distribution, governed instead by the same "L-shaped" statistics that describe other extreme events like volcanic eruptions and stock market crashes. By tinkering with the initial laser pulses used to create SC light, Solli and his team discovered how to reproduce the rogue waves, harness them, and put them to work. His results, to be presented at OFC/NFOEC 2009, demonstrate that a weak burst of light, broadcast at the perfect "tickle spot," produces a rogue wave on demand. Instead of disrupting things, it stabilizes SC light, reducing fluctuations by at least 90 percent. The seed wave also decreases the amount of energy needed to produce a supercontinuum by 25 percent. The process, says Solli, is similar to boiling water. "If you heat pure water, it can boil suddenly and explosively," he says. "But normal water has nucleation sites for bubble formation that -- like our seed waves stimulate the supercontinuum -- help the water boil smoothly with less heat."
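The "L-shaped" statistics mentioned above are heavy-tailed: extreme events, while still rare, occur far more often than a bell curve would allow. A small sketch (my own illustration with made-up parameters, not the researchers' data or code) makes the contrast concrete:

# Illustration of "L-shaped" (heavy-tailed) statistics versus a bell curve.
# The distributions, sample size, and threshold are made up for illustration only.
import random

random.seed(0)
N = 100_000
threshold = 5.0  # an "extreme event": more than 5x the typical scale

# Bell curve: standard normal samples almost never exceed the threshold.
gaussian_extremes = sum(1 for _ in range(N) if abs(random.gauss(0, 1)) > threshold)

# Heavy tail: Pareto-distributed samples (alpha = 1.5), where rare but very
# large excursions dominate, loosely analogous to rogue-wave statistics.
pareto_extremes = sum(1 for _ in range(N) if random.paretovariate(1.5) > threshold)

print(f"extreme events under a bell curve : {gaussian_extremes} of {N}")
print(f"extreme events under a heavy tail : {pareto_extremes} of {N}")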
This new-and-improved white light, developed with DARPA funding, could help to push forward a range of technologies. Solli and Jalali are developing time-stretching devices that slow down electrical signals; such devices could be used in new optical analog-to-digital converters 1,000 times faster than current electronic versions. These converters could help to overcome the current conversion-rate bottleneck that holds back advanced radar and communication technologies. Stabilized SC light could also be used to create super-fast cameras for laboratory use or incorporated into optical clockworks. The talk, "Stimulated Supercontinuum Generation," presentation OWU7, will take place Wednesday, March 25 at 5 p.m. PDT.

ABOUT OFC/NFOEC
Since 1985, the Optical Fiber Communication Conference and Exposition (OFC) has provided an annual backdrop for the optical communications field to network and share research and innovations. In 2004, OFC joined forces with the National Fiber Optic Engineers Conference (NFOEC), creating the largest and most comprehensive international event for optical communications. By combining an exposition of approximately 600 companies with a unique program of peer-reviewed technical programming and special focused educational sessions, OFC/NFOEC provides an unparalleled opportunity, reaching every audience from service providers to optical equipment manufacturers and beyond.
OFC/NFOEC is managed by the Optical Society (OSA) and co-sponsored by OSA, the Institute of Electrical and Electronics Engineers/Communications Society (IEEE/ComSoc) and the Institute of Electrical and Electronics Engineers Photonics Society (formerly LEOS). Acting as non-financial technical co-sponsor is Telcordia Technologies, Inc.
|
|
|
NEWS
Mar 5, 2009 21:40:38 GMT 4
Post by towhom on Mar 5, 2009 21:40:38 GMT 4
New Deep-Sea Coral Discovered on NOAA-Supported Mission
EurekAlert Public Release: 5-Mar-2009
www.noaanews.noaa.gov/stories2009/20090305_coral.html

Scientists identified seven new species of bamboo coral discovered on a NOAA-funded mission in the deep waters of the Papahānaumokuākea Marine National Monument. Six of these species may represent entirely new genera, a remarkable feat given the broad classification a genus represents. A genus is a major category in the classification of organisms, ranking above a species and below a family. Scientists expect to identify more new species as analysis of samples continues. “These discoveries are important, because deep-sea corals support diverse seafloor ecosystems and also because these corals may be among the first marine organisms to be affected by ocean acidification,” said Richard Spinrad, Ph.D., NOAA’s assistant administrator for Oceanic and Atmospheric Research. Ocean acidification is a change in ocean chemistry due to excess carbon dioxide. Researchers have seen adverse changes in marine life with calcium-carbonate shells, such as corals, because of acidified ocean water.

This so-called “cauldron sponge” stands three feet tall and three feet across, and was found 4,393 feet below the surface. Pisces V’s manipulator arm is in the foreground, taking a sample. Analysis is not yet complete on the cauldron sponge, but scientists expect it will turn out to be a new species. High resolution (Credit: Hawaii Deep-Sea Coral Expedition 2007/NOAA): www.noaanews.noaa.gov/stories2009/images/cauldron_sponge.jpg

“Deep-sea bamboo corals also produce growth rings much as trees do, and can provide a much-needed view of how deep ocean conditions change through time,” said Spinrad. Rob Dunbar, a Stanford University scientist, was studying long-term climate data by examining long-lived corals. “We found live, 4,000-year-old corals in the Monument – meaning 4,000 years worth of information about what has been going on in the deep ocean interior.” “Studying these corals can help us understand how they survive for such long periods of time, as well as how they may respond to climate change in the future,” said Dunbar. Among the other findings were a five-foot tall yellow bamboo coral tree that had never been described before, new beds of living deepwater coral and sponges, and a giant sponge scientists dubbed the “cauldron sponge,” approximately three feet tall and three feet across. Scientists collected two other sponges which have not yet been analyzed, but may represent new species or genera as well.

This five-foot tall yellow bamboo coral, standing in 4,787 feet of water, represents a new species and establishes a new genus of bamboo corals. High resolution (Credit: Hawaii Deep-Sea Coral Expedition 2007/NOAA): www.noaanews.noaa.gov/stories2009/images/yellow_tree_newgenus.png

The mission also discovered a “coral graveyard” covering about 10,000 square feet on a seamount’s summit, more than 2,000 feet deep. Scientists estimated the death of the community occurred several thousand to potentially more than a million years ago, but did not know why the community died. The species of coral had never been recorded in Hawaii before, according to a Smithsonian Institution coral expert they consulted. Finding new species was not an express purpose of the research mission, but Dunbar and Christopher Kelley, a scientist with the University of Hawaii, both collected specimens that looked unusual.
Kelley’s objective was to locate and predict locations of high density deep-sea coral beds in the Monument. NOAA scientist Frank Parrish also led a portion of the mission, focusing on growth rates of deep-sea corals. The three-week research mission ended in November 2007, but analysis of specimens is ongoing. “The potential for more discoveries is high, but these deep-sea corals are not protected everywhere as they are here, and can easily be destroyed,” said Kelley.

This orange bamboo coral is another new species and new genus found in the Papahānaumokuākea Marine National Monument. It is between four and five feet tall, and was found 5,745 feet below the surface. High resolution (Credit: Hawaii Deep-Sea Coral Expedition 2007/NOAA): www.noaanews.noaa.gov/stories2009/images/orangefanlike_newgenus.png

The Papahānaumokuākea Marine National Monument has more deep water than any other U.S. protected area, with more than 98 percent below SCUBA-diving depths and only accessible to submersibles. The Hawaii Undersea Research Laboratory, sponsored by NOAA and the University of Hawaii, piloted the Pisces V submersible from a research vessel to the discovery sites, between 3,300 and 4,200 feet deep. Funding for the mission was provided by NOAA through the Papahānaumokuākea Marine National Monument and NOAA’s Office of Ocean Exploration and Research. Identification of the corals was provided by Les Watling at the University of Hawaii. The Papahānaumokuākea Marine National Monument is administered jointly by the Department of Commerce, Department of the Interior, and the State of Hawaii and represents a cooperative conservation approach to protecting the entire ecosystem. NOAA’s Office of Ocean Exploration explores the Earth's largely unknown ocean for the purpose of discovery and the advancement of knowledge, using state-of-the-art technologies. NOAA understands and predicts changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and conserves and manages our coastal and marine resources.
|
|
|
NEWS
Mar 6, 2009 1:58:54 GMT 4
Post by towhom on Mar 6, 2009 1:58:54 GMT 4
Two Food Additives Have Previously Unrecognized Estrogen-like Effects
ScienceDaily Mar. 5, 2009
www.sciencedaily.com/releases/2009/03/090302125924.htm

Scientists in Italy are reporting development and successful use of a fast new method to identify food additives that act as so-called "xenoestrogens" — substances with estrogen-like effects that are stirring international health concerns. They used the method in a large-scale screening of additives that discovered two additives with previously unrecognized xenoestrogen effects. In the study, Pietro Cozzini and colleagues cite increasing concern about identifying these substances and about the possible health effects. Synthetic chemicals that mimic natural estrogens (called "xenoestrogens," literally, "foreign estrogens") have been linked to a range of human health effects. They range from reduced sperm counts in men to an increased risk of breast cancer in women.

The scientists used the new method to search a food additive database of 1,500 substances, and verified that the method could identify xenoestrogens. In the course of that work, they identified two previously unrecognized xenoestrogens. One was propyl gallate, a preservative used to prevent fats and oils from spoiling. The other was 4-hexylresorcinol, used to prevent discoloration in shrimp and other shellfish. "Some caution should be issued for the use of propyl gallate and 4-hexylresorcinol as food additives," they recommend in the study.

Journal reference: Alessio Amadasi et al. Identification of Xenoestrogens in Food Additives by an Integrated in Silico and in Vitro Approach. Chemical Research in Toxicology, 2009; 22 (1): 52 DOI: 10.1021/tx800048m

Adapted from materials provided by American Chemical Society.
|
|
|
NEWS
Mar 6, 2009 2:49:33 GMT 4
Post by towhom on Mar 6, 2009 2:49:33 GMT 4
Simple Device Can Ensure Food Gets To The Store Bacteria Free
ScienceDaily Mar. 5, 2009
www.sciencedaily.com/releases/2009/03/090302183323.htm
Kevin Keener's in-bag ozonation method creates ozone in packaged foods by using high-voltage coils to charge the gas inside sealed food packages, effectively killing any bacteria inside them. In this demonstration with a bag of tomatoes, helium has been added to a plastic bag because it glows, showing the ionization process. (Credit: Purdue Agricultural Communication photo/Tom Campbell)

A Purdue University researcher has found a way to eliminate bacteria in packaged foods such as spinach and tomatoes, a process that could eliminate worries concerning some food-borne illnesses. Kevin Keener designed a device consisting of a set of high-voltage coils attached to a small transformer that generates a room-temperature plasma field inside a package, ionizing the gases inside. The process kills harmful bacteria such as E. coli and salmonella, which have caused major public health concerns. Keener's process is outlined in an article released online early in LWT - Food Science and Technology, a journal for the Swiss Society of Food and Technology and the International Union of Food Science and Technology. "Conceptually, we can put any kind of packaged food we want in there," said Keener, an associate professor in the Department of Food Science. "So far, it has worked on spinach and tomatoes, but it could work on any type of produce or other food."

By placing two high-voltage, low-watt coils on the outside of a sealed food package, a plasma field is formed. In the plasma field, which is a charged cloud of gas, oxygen has been ionized and turned into ozone. Treatment times range from 30 seconds to about five minutes, Keener said. Ozone kills bacteria such as E. coli and salmonella. The longer the gas in the package remains ionized, the more bacteria that are killed. Eventually, the ionized gas will revert back to its original composition. The process uses only 30-40 watts of electricity, less than most incandescent light bulbs. The outside of the container only increases a few degrees in temperature, so its contents are not cooked or otherwise altered. Other methods of ozone treatment require adding devices to bags before sealing them to create ozone or pumping ozone into a bag and then sealing it. Keener's method creates the ozone in the already sealed package, eliminating any opportunity for contaminants to enter while ozone is created. "It's kind of like charging a battery. We're charging that sample," Keener said. "We're doing it without electrode intrusion. We're not sticking a probe in the package. We can do this in a sealed package."

Keener said testing has worked with glass containers, flexible plastic-like food-storage bags and rigid plastics, such as strawberry cartons and pill bottles. He said the technology also could work to ensure pharmaceuticals are free from bacteria. According to the Centers for Disease Control and Prevention, about 40,000 cases of Salmonellosis, an infection caused by salmonella, are reported each year in the United States, causing 400 deaths. The CDC reports that about 70,000 E. coli infections are reported each year, causing dozens of deaths. Funding for Keener's research came from Purdue Agriculture. A patent on the technology is pending. Keener said the next step is to develop a commercial prototype of the device that could work on large quantities of food. Adapted from materials provided by Purdue University.
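As a rough sanity check on the power figures quoted above, here is a short back-of-envelope sketch (my own arithmetic; the per-package treatment time and wattage are taken from the ranges mentioned in the article, everything else is an assumption) of how little energy each treatment consumes:

# Back-of-envelope energy estimate for the in-bag ozonation treatment described
# above: energy = power x time. The wattage and treatment time come from the
# ranges quoted in the article; the rest is illustrative.

power_watts = 40.0              # upper end of the quoted 30-40 W range
treatment_seconds = 5 * 60      # upper end of the quoted 30 s to 5 min range

energy_joules = power_watts * treatment_seconds
energy_kwh = energy_joules / 3.6e6   # 1 kWh = 3.6 million joules

print(f"energy per package ~ {energy_joules:.0f} J ({energy_kwh * 1000:.2f} Wh)")
# ~12,000 J, i.e. roughly 0.003 kWh per package, a tiny amount of electricity.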
|
|