
Defining electrical units of measurement in terms of universal constants allows precise standards to be established. Both the volt and the ohm can be defined from the elementary charge e and the Planck constant h by exploiting the Josephson effect and the quantum Hall effect, respectively.

However, an equivalent, robust standard for the ampere is still lacking. One proposal is to use single-electron pumps, i.e. quantum devices that shuffle electrons one at a time at a certain frequency f, so that the standard of current can be defined from the product of the elementary charge and the frequency, I = ef. The drawback is that these devices operate in the tunnelling regime, whose stochastic nature causes the measured current to fluctuate around the value ef.
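As a back-of-the-envelope illustration (the 1 GHz drive frequency below is an assumed round number, not a value from the experiment), the ideal pumped current follows directly from the product ef:

    # Illustrative only: ideal current delivered by a single-electron pump.
    e = 1.602176634e-19   # elementary charge in coulombs (exact SI value)
    f = 1e9               # assumed pumping frequency of 1 GHz

    I = e * f             # ideal pumped current, I = e*f
    print(f"I = {I:.3e} A")   # about 1.6e-10 A, i.e. roughly 160 pA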

Scientists at the Physikalisch-Technische Bundesanstalt (PTB) have now experimentally demonstrated a device configuration that can overcome this problem. They have implemented a series of three single-electron pumps and two charge detectors, which monitor the flow of electrons across the pumps. Single electrons are shuffled across by applying voltage pulses to each pump in a certain sequence. Subsequent pulses then allow the detection of pumping errors, that is, of events in which a pump fails to shuffle an electron. Knowledge of these errors allows, in turn, the deviations of the current from ef to be determined, and eventually a tenfold improvement in accuracy to be achieved compared with individual electron pumps.
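A minimal toy model of this error-accounting idea, assuming a fixed per-pump error probability and perfect detectors (both placeholders rather than the PTB device parameters), is sketched below; the point is only that flagged cycles can be subtracted from the nominal electron count:

    import random

    # Toy model, not the PTB implementation: three pumps in series, each failing
    # with probability p_err per cycle; the charge detectors are assumed to flag
    # every cycle in which any pump misses its electron.
    random.seed(0)
    p_err, cycles = 1e-3, 100_000

    flagged = 0
    for _ in range(cycles):
        if any(random.random() < p_err for _ in range(3)):
            flagged += 1                  # detectors register a pumping error

    e, f = 1.602176634e-19, 1e9           # elementary charge (C), assumed 1 GHz drive
    I_nominal = e * f                     # what an ideal pump would deliver
    I_corrected = e * f * (cycles - flagged) / cycles   # error-accounted current
    print(I_nominal, I_corrected)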

Single Electron Source

Figure: SEM image of the device. The semiconductor part between source and drain (green) consists of three pumps and two charge nodes (blue, red). Each pump is defined by three metallic top gates (yellow) forming a quantum dot (QD) in the semiconductor.

 

Article @ DOI: 10.1103/PhysRevLett.112.226803

Edited and extracted from "Reliable single-electron source" by Elisa De Ranieri.

Finding a substrate material for solar cells that simultaneously provides high optical transparency and high transmission haze is challenging. It now appears that an engineered paper could be an ideal substrate. Scientists have developed a wood-fibre-based nanostructured paper that provides a transparency of ~96% and a haze of ~60%. This material is potentially useful for photovoltaics, where it could reduce the angular dependence of light harvesting in solar cells, and it could also benefit outdoor displays by reducing glare and specular reflections of sunlight. (The initial demonstration can be seen in Nano Lett. 14, 765–773; 2014.)

The team produced the transparent paper by using a TEMPO-mediated oxidation process to introduce carboxyl groups into the cellulose fibres of wood. This process weakens the hydrogen bonds between the cellulose fibrils, causing the wood fibres to swell. The result is a paper with a much higher packing density than usual and greatly improved optical transparency and haze.

Analysis by scanning electron microscopy revealed that the transparent paper has a homogeneous surface as a result of voids being filled by small fibre fragments. In the spectral range of 400–1,100 nm, the transparent paper had a transmittance of ~96% and a transmission haze of ~60%.
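For reference, transmission haze is conventionally quoted as the fraction of transmitted light that is scattered out of the direct beam, i.e. the diffuse transmittance divided by the total transmittance. A short, purely illustrative sketch using the reported figures:

    # Illustrative only: relation between total transmittance, haze and the
    # diffuse/direct split of the transmitted light.
    T_total = 0.96                    # total transmittance (~96%)
    haze = 0.60                       # transmission haze (~60%)

    T_diffuse = haze * T_total        # scattered (diffuse) part of transmitted light
    T_direct = T_total - T_diffuse    # unscattered (direct) part
    print(T_diffuse, T_direct)        # ~0.58 and ~0.38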

The benefits of the enhanced haze of the transparent paper for photovoltaic devices were demonstrated by laminating the paper onto the top of an organic solar cell and measuring the cell's photocurrent as a function of the incident angle of white light.

According to the authors, the improvements can be explained by two factors: the reduced reflection of light due to the low index contrast between the top layer of the photovoltaic device and the transparent paper, and the directional change of the incident light within the transparent paper.
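The first factor can be illustrated with the normal-incidence Fresnel reflectance R = ((n1 - n2)/(n1 + n2))^2; the refractive indices below are assumed round values for illustration, not numbers from the article:

    # Back-of-the-envelope sketch: a small index contrast means little reflection.
    def fresnel_R(n1, n2):
        return ((n1 - n2) / (n1 + n2)) ** 2

    n_air, n_paper, n_top = 1.0, 1.5, 1.55   # assumed refractive indices

    print(fresnel_R(n_air, n_top))    # bare device/air interface: ~4.7% reflected
    print(fresnel_R(n_paper, n_top))  # paper/device interface: ~0.03% reflected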

Researchers are now pursuing further gains in solar-cell efficiency by modifying cells with paper produced using this technology. The initial results are promising, but whether the approach can help ease the efficiency bottleneck of solar cells remains to be seen.

 

Edited and extracted from an article by Noriaki Horiuchi in Nature Photonics (doi:10.1038/nphoton.2014.43).

 

 

At the start of 2013, the European Union awarded one of the two Future and Emerging Technologies 'flagship' initiatives to the Human Brain Project (the other one going to a project focused on graphene). Almost simultaneously, President Barack Obama announced the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative in the US.

As one of the flagship initiatives, the Human Brain Project is due to receive a staggering €1 billion over 10 years, half of which will come from the European Union and the other half from the funding agencies of the individual countries involved. It is a large collaboration involving over 100 partners, and €72 million (~US$98 million) will be awarded during the first 30 months alone. The principal goals of the project are to simulate the activity of a human brain using supercomputers and to use the knowledge obtained to improve the way computers work.

The BRAIN initiative originated from a call by a large number of scientists to launch a collaborative effort, named the Brain Activity Map project, to record and analyse the activity of large sets of neurons in the brain. This call was answered by the White House, which backed it with the promise of several hundred million dollars of public funding over the next few years and called for support from private investors. So far, about US$110 million has been committed by the Defense Advanced Research Projects Agency, the National Institutes of Health and the National Science Foundation for the first year, and private investors have promised around US$130 million for each of the next few years.

The initiatives originate from two simple facts. First, our current understanding of how the brain works is very poor, which hampers the discovery of effective treatments for mental-health disorders. Second, the network of neurons in the human brain is vast and extremely complex. Understanding the way in which signals are transmitted, and how these transmissions translate into thoughts and sensations, can only be achieved through large collaborations able to produce and analyse huge sets of data. It is, therefore, no coincidence that the BRAIN initiative has been compared with endeavors such as the Human Genome Project and even the Apollo project that landed a man on the Moon.

 Extracted from doi:10.1038/nnano.2014.23

The Flame Lens


Scientists have developed a hot-gas lens whose optical performance and damage threshold are estimated to be vastly superior to those of conventional glass optics. Once optimized, such lenses may be useful for focusing ultra-intense laser beams, such as those used in X-ray lasers, laser-driven accelerators and laser-fusion experiments.

The lens is capable of transmitting beams whose intensities are two orders of magnitude higher than the maximum intensity that solid-state lenses can transmit without sustaining damage. In the event of breakdown, the lens repairs itself almost instantaneously, unlike solid-state optics, which are either permanently impaired or must be left to cool for hours.

The idea of using a hot metal tube to create a temperature gradient, and thus a lens-like refractive index profile in a gas, has been around for some time. Bell Laboratories in the USA investigated the idea in the 1960s, not long after the development of the first lasers. Early designs were plagued by severe limitations: their apertures were small (of the order of 7 mm), their focal lengths were very long (2.5 m to 10 m), the tubes themselves were of the order of a metre long and bulky, and they required complicated ancillary apparatus.

To address these issues, the scientists designed a composite gas lens that consists of two parts. The first stage is a 50-mm-long metal-tube gas lens with a 10-mm-diameter aperture that is heated from below and refracts the outer rays of a light beam. The second stage is a shorter tube, 25 mm in length; it contains a spiral flame that mainly acts on the inner rays. The stainless-steel tubes of both lenses are heated to around 400 °C so that they become red hot. The result is a flame lens that brings light to a sharp focus, is more compact, and has a focusing power per unit length four times stronger than earlier gas-lens designs.
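As a rough order-of-magnitude sketch (not the authors' model), the focusing of such a heated tube can be estimated from the Gladstone-Dale relation (n - 1 is proportional to the gas density, i.e. to 1/T at fixed pressure) together with the thin gradient-index result f ≈ 1/(n2*L) for a parabolic index profile n(r) = n0 - 0.5*n2*r^2. The temperature profile assumed below is a guess, chosen only to show that metre-scale focal lengths are plausible:

    # Order-of-magnitude estimate with assumed numbers (not from the article).
    n_minus_1_ref, T_ref = 2.7e-4, 288.0   # refractivity of air near room temperature
    C = n_minus_1_ref * T_ref              # (n - 1) = C / T at constant pressure

    T_axis, T_wall = 400.0, 673.0          # assumed gas temperatures on axis / at wall (K)
    a, L = 5e-3, 50e-3                     # 5 mm tube radius, 50 mm first-stage length

    # Assumed parabolic T(r) gives a roughly parabolic n(r); on-axis curvature:
    n2 = 2 * C * (T_wall - T_axis) / (T_axis**2 * a**2)
    f = 1 / (n2 * L)                       # thin gradient-index slab focal length
    print(f"f ~ {f:.1f} m")                # of the order of a couple of metres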

A prototype flame lens with a focal length of about 2 m has been used in proof-of-principle experiments that include the focusing of high-intensity light, imaging of highly chromatic sources and drilling of plastic with high-energy pulses. Scientists are now using aerodynamic theory to optimize the lens structure and further improve its performance.

Article taken and edited from: Graydon, Oliver. "Optics: The flame lens." Nature Photonics 7, 592 (2013).

Researchers at Georgia Tech and the McGovern Institute for Brain Research at MIT (http://mcgovern.mit.edu/) have developed a way to automate the process of finding and recording information from neurons in the living brain. The researchers have shown that a robotic arm guided by a cell-detecting computer algorithm can identify and record from neurons in the living mouse brain with better accuracy and speed than a human experimenter. Using this technique, scientists could classify the thousands of different types of cells in the brain, map how they connect to each other, and figure out how diseased cells differ from normal cells.
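The cell-detecting algorithm reportedly works by monitoring the pipette's electrical resistance as the robot lowers it through the tissue: a sustained rise in resistance indicates that the tip has met a cell. The following is a highly simplified, hypothetical sketch of such a "neuron hunting" loop; the helper functions, step size and threshold are placeholders, not the authors' parameters:

    # Hypothetical sketch of a resistance-based neuron-hunting loop (placeholders only).
    def measure_pipette_resistance():
        """Placeholder: return the pipette resistance (megaohms) from the amplifier."""
        raise NotImplementedError

    def step_pipette_down(microns):
        """Placeholder: advance the motorized pipette by the given distance."""
        raise NotImplementedError

    def hunt_for_neuron(max_steps=200, step_um=2.0, jump_threshold_mohm=0.3):
        baseline = measure_pipette_resistance()
        for _ in range(max_steps):
            step_pipette_down(step_um)
            r = measure_pipette_resistance()
            # A sustained rise in resistance suggests the tip is pressing on a cell.
            if r - baseline > jump_threshold_mohm:
                return True   # candidate neuron found; later stages would form a
                              # gigaohm seal by suction and then break into the cell
            baseline = min(baseline, r)
        return False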

Reference: S. Kodandaramaiah, G. Franzesi, B. Chow, E. Boyden, C. R. Forest, Automated whole-cell patch clamp electrophysiology of neurons in vivo. Nature Methods, Vol. 9(6), p. 585–587, May 2012. (www.nature.com/nmeth/journal/v9/n6/abs/nmeth.1993.html)


Blind Mice, No Longer

In a study published on April 19, 2011 in the journal Molecular Therapy, researchers at the McGovern Institute for Brain Research at MIT and the University of Southern California used optogenetic technology to restore vision in blind mice.

Images and footage courtesy of the McGovern Institute for Brain Research at MIT, Ed Boyden, Alan Horsager, University of Southern California, Eos Neuroscience, and pond5.com

See the original video and more on MIT TechTV – http://techtv.mit.edu/videos/12312