Posts Tagged ‘science’

Defining electrical units of measurement in terms of universal constants allows precise standards to be established. Both the volt and the ohm can be defined from the elementary charge e and the Planck constant h by exploiting the Josephson effect and the quantum Hall effect, respectively.
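
As a quick illustration (not taken from the article), the two constants that underpin these voltage and resistance standards follow directly from e and h; a minimal Python check:

```python
# The volt and ohm standards rest on two combinations of e and h.
# e and h below are the exact values fixed by the 2019 SI redefinition.
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J*s

K_J = 2 * e / h       # Josephson constant (Hz/V), underlying the volt standard
R_K = h / e**2        # von Klitzing constant (ohm), underlying the ohm standard

print(f"K_J = {K_J:.6e} Hz/V")   # ~4.835979e14 Hz/V
print(f"R_K = {R_K:.3f} ohm")    # ~25812.807 ohm
```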

However, an equivalent, robust standard for the ampere is still lacking. One proposal is to use single-electron pumps, i.e. quantum devices that shuffle electrons one at a time at a certain frequency f, so that the standard of current can be defined from the product of the elementary charge and the frequency (ef). The drawback is that these devices operate in the tunnelling regime, whose stochastic nature results in fluctuations of the measured current away from the value ef.
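
As a toy calculation (the frequency is an assumed, illustrative value, not one from the article), a pump that transfers exactly one electron per cycle at f = 1 GHz carries a current of only about 160 pA:

```python
# Ideal pumped current I = e*f for an assumed pumping frequency.
e = 1.602176634e-19   # elementary charge, C
f = 1e9               # pumping frequency, Hz (illustrative assumption)

I = e * f             # ideal single-electron-pump current, A
print(f"I = {I*1e12:.1f} pA")   # ~160.2 pA
```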

Scientists at the Physikalisch-Technische Bundesanstalt (PTB) have now experimentally demonstrated a device configuration that can overcome this problem. They have implemented a series of three single-electron pumps and two charge detectors, which monitor the flow of electrons across the pumps. Single electrons are shuffled across by applying voltage pulses to each pump in a certain sequence. Subsequent pulses then allow the detection of pumping errors, that is, of events in which a pump fails to shuffle an electron. Knowledge of these errors allows, in turn, the deviations of the current from ef to be determined, eventually achieving a tenfold improvement in accuracy compared with individual electron pumps.
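
The error-accounting idea can be sketched with a toy Monte Carlo model. This is only a conceptual illustration under assumed numbers (failure probability, frequency, number of cycles), not the authors' actual measurement protocol:

```python
import random

# Toy model: each pumping cycle fails with a small probability; the charge
# detectors register those failures, and the current estimate is corrected
# by the number of counted errors.
random.seed(0)
e = 1.602176634e-19     # elementary charge, C
f = 1e9                 # pumping frequency, Hz (assumed)
p_err = 1e-4            # per-cycle failure probability (assumed)
cycles = 1_000_000

errors = sum(1 for _ in range(cycles) if random.random() < p_err)

I_naive = e * f                                    # assumes every cycle succeeds
I_corrected = e * f * (cycles - errors) / cycles   # accounts for detected misses

print(f"detected errors   : {errors}")
print(f"relative deviation: {errors / cycles:.2e}")
```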

Single Electron Source

Figure: SEM image of the device. The semiconductor part between source and drain (green) consists of three pumps and two charge nodes (blue, red). Each pump is defined by three metallic top gates (yellow) forming a quantum dot (QD) in the semiconductor.

Article @ DOI: 10.1103/PhysRevLett.112.226803

Edited and Extracted from “Reliable single-electron source” by Elisa De Ranieri

Read Full Post »

Finding a substrate material for solar cells that simultaneously provides a high optical transparency and a high transmission haze is challenging. It now appears that an engineered paper could be an ideal substrate. Scientists have developed a wood-fibre-based nanostructured paper that provides a transparency of ~96% and a haze of ~60%. This material is potentially useful for photovoltaics, where it could reduce the angular dependence of light harvesting for solar cells, and it could also benefit outdoor displays by reducing glare and specular reflections of sunlight. (Initial demonstration can be seen in Nano Lett. 14, 765–773; 2014).

The team produced the transparent paper by using a TEMPO-mediated oxidation to introduce carboxyl groups into the cellulose fibres of wood. This process weakens the hydrogen bonds between the cellulose fibrils, causing the wood fibres to swell. The result is a paper with a much higher packing density than usual and greatly improved optical transparency and haze.

Analysis by scanning electron microscopy revealed that the transparent paper has a homogeneous surface as a result of voids being filled by small fibre fragments. In the spectral range of 400–1,100 nm, the transparent paper had a transmittance of ~96% and a transmission haze of ~60%.

The benefits of the enhanced haze of the transparent paper for photovoltaic devices were demonstrated by laminating the paper to the top of an organic solar cell and measuring the cell’s photocurrent as a function of the incident angle of white light.

According to the authors, the improvements can be explained by two factors: the reduced reflection of light, owing to the low index contrast between the top layer of the photovoltaic device and the transparent paper, and the directional change of the incident light within the transparent paper.
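
The first factor can be illustrated with a normal-incidence Fresnel estimate. The refractive indices used here are assumed, round numbers for the sake of illustration, not values reported in the paper:

```python
# Fresnel reflectance at normal incidence between two media.
def fresnel_R(n1: float, n2: float) -> float:
    return ((n1 - n2) / (n1 + n2)) ** 2

# Assumed indices: air, cellulose paper, device top layer.
n_air, n_paper, n_top = 1.0, 1.5, 1.8

print(f"air   -> top layer: R = {fresnel_R(n_air, n_top):.3f}")    # ~0.082
print(f"paper -> top layer: R = {fresnel_R(n_paper, n_top):.3f}")  # ~0.008
```

With the paper in place, the smaller index step at the device surface cuts the reflected fraction by roughly an order of magnitude in this simple picture.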

Scientists are now pursuing further gains in solar-cell efficiency by laminating cells with paper developed using this technology. The initial results are promising, but whether this approach can ease the efficiency bottleneck of solar cells remains to be seen.

Edited and Extracted from article by Noriaki Horiuchi in Nature Photonics (doi:10.1038/nphoton.2014.43)

Read Full Post »

At the start of 2013, the European Union awarded one of the two Future and Emerging Technology ‘flagship’ initiatives to the Human Brain Project (the other one going to a project focused on graphene). Almost simultaneously, President Barack Obama announced the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative in the US.

As one of the flagship initiatives, the Human Brain Project is due to receive a staggering €1 billion over 10 years, half of which will come from the European Union and the other half from the funding agencies of the individual countries involved. It is a large collaboration involving over 100 partners, and €72 million (~US$98 million) will be awarded during the first 30 months alone. The principal goals of the project are to simulate the activity of a human brain using supercomputers and to use the knowledge obtained to improve the way computers work.

The BRAIN Initiative originated from a call from a large number of scientists to launch a collaborative effort, named the Brain Activity Map project, to record and analyse the activity of large sets of neurons in the brain. This call was answered by the White House, which backed it with the promise of several hundred million dollars of public funding over the next few years and called for support from private investors. So far, about US$110 million has been committed by the Defense Advanced Research Projects Agency, the National Institutes of Health and the National Science Foundation for the first year, and private investors have promised around US$130 million for each of the next few years.

The initiatives originate from two simple facts. First, our current understanding of how the brain works is very poor, which hampers the discovery of effective cures for mental health disorders. Second, the network of neurons in the human brain is extremely vast and complex. Understanding the way in which signals are transmitted, and how these transmissions translate into thoughts and sensations, can only be achieved through large collaborations able to produce and analyse huge sets of data. It is, therefore, no coincidence that the BRAIN Initiative has been compared with endeavours such as the Human Genome Project and even the Apollo project that landed a man on the Moon.

Extracted from doi:10.1038/nnano.2014.23

Read Full Post »

In this story, there are two protagonists: a photon and an electron. Optical communication devices use photons (light), while signal-processing devices use electrons. The conversion of optical signals into electronic ones, and vice versa, slows down communication. Recently, a new transistor that processes signals by emitting light has been proposed and experimentally demonstrated by Leonid V. Butov and Arthur C. Gossard (University of California).

Signal processing is mainly carried out before transmitting and after receiving information, and it is mostly done using semiconductor integrated circuits. These miniature integrated circuits are built from transistors, which currently use electrons for signal processing. Electrons and photons do not interact directly with each other, and this presents a major bottleneck in modern communication and signal processing. The direct use of light, without conversion, would speed up both computation and communication.

The proposed transistor is based on gallium arsenide (GaAs) and processes signals using indirect excitons instead of electrons. These excitons are controlled by gate electrodes, just as in standard silicon field-effect transistors (FETs), and can be easily coupled to photons. This allows faster signal transmission to other optically connected on-chip and off-chip devices. The advantage for computation, however, is not as great as the one for communication.

Excitons are electron-hole pairs, bound by the attractive force between negatively charged electrons and positively charged holes. Because of this force, excitons tend to recombine quickly, releasing a flash of light. Their lifetime can be extended up to around ten microseconds by confining the electrons and holes in spatially separated layers, forming so-called indirect excitons. Excitons are stable at low temperatures (below 40 K) but dissociate easily at higher temperatures, whereas real applications require stable operation at room temperature and above; this question still needs to be addressed. No inorganic semiconductor material is available so far that allows a stable exciton population at room temperature. Materials such as ZnSe, CdTe and GaN may sustain exciton populations at room temperature, but only with an extremely narrow spatial separation between the electron and hole of each exciton. This lack of stable room-temperature operation remains a bottleneck for the fabrication of high-quality exciton-based ICs (EXICs).
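
A rough back-of-the-envelope check (the ~4 meV binding energy used here is a typical textbook value for GaAs, taken as an assumption rather than from the article) shows why the 40 K limit appears: an exciton survives only while its binding energy exceeds the thermal energy k_B·T.

```python
# Compare the thermal energy k_B*T with an assumed GaAs exciton binding energy.
k_B = 8.617333e-5   # Boltzmann constant, eV/K
E_b = 4e-3          # assumed exciton binding energy in GaAs, eV (~4 meV)

for T in (40, 300):
    kT = k_B * T
    status = "below" if kT < E_b else "above"
    print(f"T = {T:3d} K: k_B*T = {kT*1e3:5.1f} meV ({status} the ~4 meV binding energy)")
```

At 40 K the thermal energy (~3.4 meV) is still just below the binding energy, while at room temperature (~26 meV) it exceeds it several times over, so the pairs break apart.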

Processing signals with excitons also requires the energy to be transferred before the excitons decay. This limits the number of transistors that can be integrated on a chip, which is a crucial condition for computation. Excitons that couple strongly to light have short lifetimes (a few nanoseconds) and small propagation distances, while excitons with longer lifetimes couple poorly to light. This is another obstacle to the fabrication of devices operating in real time.
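
To see how the lifetime limits propagation, one can use the standard diffusion-length estimate L ≈ √(D·τ). The diffusion coefficient and lifetimes below are assumed, order-of-magnitude values for illustration, not figures from the article:

```python
import math

D = 1e-3   # assumed exciton diffusion coefficient, m^2/s (~10 cm^2/s)

# A few nanoseconds (strongly light-coupled exciton) vs ~10 microseconds
# (long-lived indirect exciton).
for tau in (3e-9, 1e-5):
    L = math.sqrt(D * tau)   # diffusion length before decay
    print(f"tau = {tau:.0e} s  ->  L ~ {L*1e6:6.1f} um")
```

Under these assumptions the short-lived exciton travels only a couple of micrometres before recombining, whereas the long-lived indirect exciton can cover of the order of a hundred micrometres.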

The success of this technology depends on how these open questions are addressed. Exciton-based optical transistors could become a reality, and a paradigm shift, if we are able to find materials and methods for room-temperature operation; that could pave the way for a technological revolution. For now, all I would say is that conventional solid-state optoelectronics still has huge intrinsic potential for further development.

Extracted and Summarized from “Will Excitonic Circuits Change Our Lives?” in Optics and Photonics Focus published in August 2008

Read Full Post »