Holographic data storage records information in the form of volumetric gratings within the recording medium [8]. In digital holographic storage, information is typically encoded in the form of 2D pixelated images (data pages) that include 2D modulation coding, data shuffling, and error correction [5, 6]. A multitude of (up to several thousand) holograms can be recorded in virtually the same volume through angular, wavelength, or shift-and-correlation multiplexing techniques (due to the selective properties of the gratings) [10]. Unlike conventional storage techniques, holographic data storage uses the entire volume, rather than just the surface, of the recording medium, greatly increasing potential data density.
The page organization of such memory provides tremendous parallelism, resulting in far greater data-transfer rates than in conventional optical or magnetic storage. As many as a million bits can be read in parallel, compared to only one (or several) bits in conventional optical or magnetic storage. The very high data densities and transfer rates available through holographic storage have thus made it the focus of intense research and development over the past 20 years.
Here, I emphasize recent experiments and technology demonstrations in digital holographic data storage as part of the Defense Advanced Research Projects Agency (DARPA) Photorefractive Information Storage Materials (PRISM) consortium and the Holographic Data Storage System (HDSS) consortium. Much progress has been made in recent years on the development of new and improved holographic media, spatial light modulators, and detector arrays, as well as in the opto-mechanical components needed for high-performance system implementations. As a result, impressive demonstrations have been achieved within both consortia, as well as at other laboratories around the world.
In the 1970s, fundamental materials and systems work on holographic data storage was carried out at RCA [1, 11], the Soviet Academy of Science [2], Bell Laboratories [12], and other industrial and university laboratories around the world [3]. New and improved components were demonstrated and incorporated into complex systems. However, at that time, there was neither strong market demand nor a cost-competitive system demonstration to drive large-scale commercial development. Then in the mid-1980s, work at the California Institute of Technology, Microelectronics and Computer Corp., Northrop-Grumman Research Center [10], Stanford University, and other laboratories spurred renewed interest in holographic storage [6]. The primary drivers this time were components newly developed for consumer applications, as well as growing commercial demand for large-capacity storage devices.
It was because holographic recording material was not being developed for consumer applications that some of the field’s leading researchers established PRISM in 1994 and HDSS a year later. PRISM would focus on fundamental materials issues; the new components developed through HDSS would be integrated into systems delivering 100GB capacity and a raw data-transfer rate of more than 1Gb/s. (At the time, 1994-95, Lambertus Hesselink of Stanford University was the principal investigator and Glenn Sincerbox of the University of Arizona was the co-principal investigator of the DARPA/NSIC/Industry/University consortia on PRISM and HDSS.)
Useful capacity, transfer rate, and access time are the key measures of the performance of a holographic data storage system. Moreover, because any storage system should never lose data, the system’s corrected bit-error rate should be 10⁻¹² to 10⁻¹⁵ (bit-error probability). To be competitive in the market for large storage systems, a holographic data storage drive has to be cost-competitive while delivering improved performance over other conventional optical and magnetic drives. The exception could be certain niche markets, where the special attributes of holographic storage (all-solid-state implementation, extremely short access times, extremely high transfer rates) are attractive or critical. High-density data storage and extremely fast data-transfer rates make holographic storage a candidate technology for extending the DVD road map beyond shorter-wavelength blue laser technology and its resulting increase in bit density.
Optical Architectures
Information is recorded holographically by superimposing two mutually coherent beams inside a photosensitive medium. One beam carries information; the other is a plane wave reference beam containing information for page indexing. In the region where the information beam and the reference beam overlap, an interference pattern is formed. Since the medium is thick compared to optical wavelengths, multiple holograms can be superimposed inside the same medium without causing crosstalk by slightly rotating the reference beam in subsequent recordings. This superimposition is an advantage for boosting data storage density compared to surface recording.
The data-bearing beam contains an image of a bit pattern imprinted onto the beam using a spatial light modulator (SLM), which is like a liquid crystal display device made up of hundreds of thousands of pixels. Spatial on-off patterns encode bit patterns. Because many bits are recorded and read out in parallel, holographic data storage provides very high recording and read-out data rates.
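To make the page-oriented channel concrete, the following minimal sketch (Python, with an illustrative 1024×1024 page size and noise level that are assumptions rather than parameters of any particular system) lays a block of user bits out as an on/off pixel page and recovers the whole page in one parallel thresholding step:

```python
import numpy as np

PAGE = (1024, 1024)  # assumed pixels per data page (illustrative)

def bits_to_page(bits: np.ndarray) -> np.ndarray:
    """Lay a flat bit array out as a 2-D on/off pixel page for the SLM."""
    assert bits.size == PAGE[0] * PAGE[1]
    return bits.reshape(PAGE).astype(np.uint8)

def page_to_bits(detected: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Recover every bit of the page in one parallel thresholding step."""
    return (detected > threshold).astype(np.uint8).ravel()

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, PAGE[0] * PAGE[1])
page = bits_to_page(bits)

# Simulate a noisy read-out of the entire page at once (additive noise only).
readout = page + rng.normal(0.0, 0.1, PAGE)
recovered = page_to_bits(readout)
print("bit errors:", int(np.count_nonzero(recovered != bits)))
```

A single global threshold works only if all pages are reconstructed with similar strength; the differential encoding described later removes that requirement.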
Another advantage of such systems is that information can be retrieved based on partial information about the content of the stored data. Instead of recalling a whole data page by shining the reference beam onto the medium, a partial image-bearing object beam can be used to produce a reconstructed reference beam that in turn provides the address of the stored information content most closely correlated with the partial input information.
This so-called “associative” data retrieval is significant for image-based datasets because they are difficult to process using relational database techniques. As a whole volume of data is searched simultaneously, very fast data search rates are achieved. Under experimental conditions, rates up to 100Gb/s have been demonstrated at IBM and at Stanford, employing recent advances in storage materials and optical and electrical components.
Broadly speaking, holographic data-storage materials are divided into two classes: thin (a few hundred microns) photosensitive organic media; and thick (a few millimeters to centimeters) inorganic photorefractive crystals. The thin media lend themselves to transmission-type architectures using a variety of shift-multiplexing or phase-multiplexing techniques; thick media are most suitable for angular multiplexing, typically in the 90-degree geometry. Although phase- and wavelength-multiplexing techniques have been investigated thoroughly over the past 20 years, the lasers and phase masks required for today’s holographic systems are still not suitably developed to achieve large data densities [6].
For example, consider the 90-degree geometry of holographic storage using a photorefractive medium (see Figure 1). The data-bearing object beam and the reference beam are incident upon a thick photorefractive medium from orthogonal directions. The intersecting signal and reference beams generate a sinusoidal interference pattern with regions of high and low intensity. Photons excite electrons into the conduction band in higher concentrations within the regions of constructive interference than within the regions of destructive interference. Electrons diffuse in the direction of the concentration gradient and drift (due to the photovoltaic effect) to the regions of destructive interference, where they relax to empty states in the energy-band gap. The resulting space-charge field modulates the index of refraction of the photorefractive recording material (such as LiNbO3, or lithium niobate) through the Pockels effect, or the change in index of refraction under an applied electric field. This periodic index-of-refraction modulation itself represents a volume-phase hologram, which can be used for storing information. Multiple holograms can share the same volume of the medium and be read out selectively due to the high Bragg, or diffraction, selectivity of volumetric phase gratings.
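As a rough illustration of how Bragg selectivity permits so many holograms to share one volume, the sketch below applies the common Δθ ≈ λ/L approximation for the angular selectivity of a thick grating; the crystal thickness and the usable scan range are illustrative assumptions, not measured PRISM/HDSS parameters:

```python
import numpy as np

# Back-of-the-envelope Bragg selectivity for a thick recording medium.
wavelength_um = 0.532            # green recording light
thickness_um = 10_000.0          # assumed 1cm-thick LiNbO3 crystal
scan_range_rad = np.deg2rad(20)  # assumed usable reference-beam scan range

delta_theta = wavelength_um / thickness_um   # approximate first Bragg null (radians)
n_holograms = scan_range_rad / delta_theta
print(f"angular selectivity ~{delta_theta * 1e6:.0f} microradians")
print(f"roughly {n_holograms:,.0f} angularly multiplexed holograms per location")
```

The result, several thousand holograms per location, is consistent with the multiplexing figures cited at the outset.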
During readout with a uniform beam, the space-charge field is erased through the same mechanism. Prolonged readout therefore requires breaking the symmetry of the read-write process. For this purpose, researchers have investigated so-called ionic fixing procedures, in which the medium temperature is elevated during recording to make hydrogen ions mobile [1]. Charge screening between electrons and ions reduces the resulting space-charge field. During readout at room temperature, the electronic space-charge field is erased, revealing the ionic space-charge field. Because the ions are practically immobile at room temperature, readout is permanent and affected only by so-called dark conductivity (conductivity at zero light intensity).
An alternative to ionic hologram fixing is two-photon gated recording, whereby the material becomes sensitive to the recording light (such as red) but only in the presence of another, gating, beam of a different (usually shorter) wavelength. Two-photon recording techniques have been investigated since the early days of holographic memories, particularly at Bell Laboratories [12]. The main difficulty with that early work was the low sensitivity of the medium and the high laser power needed to record data in lithium niobate. More recently, researchers at Stanford University have overcome this problem by modifying the medium’s composition, bringing it closer to the ideal chemical composition (more stoichiometric), adding iron and other dopants, and combining these changes with optimized oxidation and reduction post-growth treatments [7]. As a result, the material’s sensitivity has been improved by several orders of magnitude over previous results, making the medium approximately as sensitive as iron-doped lithium niobate in green recording light. At an Li2O concentration of 49.9%, the dynamic range of the material is quite high, with measured values of M# (the sum of the square roots of the diffraction efficiencies of all superimposed holograms) of about 10. This wide dynamic range is significant for data-storage applications, as it allows many holograms to be superimposed. The single-photon (ungated) sensitivity of these materials in the near infrared, which is responsible for erasure upon reading the information, is quite small, thus providing a means for prolonged readout.
The second architecture widely investigated over the past 10 years is based on a rotating disk in which the medium is a photopolymer sandwiched between two transparent glass substrates (see Figure 2). Typically, the photopolymer medium has a thickness of a few hundred microns. Shift multiplexing was developed for hologram superimposition in a moving medium by Andrei Mikaelian and his group at the Soviet Academy of Science (now the Russian Academy of Science) [2] and later extended by Demetri Psaltis and his group at Caltech.
Because moving the medium with respect to a fixed spherical reference wave and the object wave violates the Bragg condition (diffraction from volumetric gratings), a new hologram can be recorded with little crosstalk simply by shifting the medium by several tenths of a micron. Since the spherical reference wave by itself provides only limited selectivity, however, this multiplexing method does not provide very high recording densities. Modifying the phase front of the reference beam with a random phase mask helps achieve excellent selectivity, and engineering the magnitude and the correlation function of the phase mask modifies the decorrelation distance of the reference wave. With a random phase mask in the reference beam, more than 100 holograms have been superimposed in the Stanford system, even in thin (200µm) media, with good diffraction efficiency and low crosstalk. These results indicate that densities greater than 60b/µm² can be achieved with speckle-correlation shift multiplexing (media shifts of a few microns), using a random or pseudo-random phase mask as the reference-beam generator.
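A quick way to see where such hologram counts come from is to compare the hologram footprint with the shift spacing; in the sketch below the footprint is an assumed value, while the few-micron shift spacing comes from the text:

```python
# Rough count of shift-multiplexed holograms overlapping one spot of the disk.
beam_footprint_um = 1_000.0   # assumed hologram footprint along the track
shift_spacing_um = 5.0        # medium shift between consecutive holograms

overlapping = beam_footprint_um / shift_spacing_um
print(f"~{overlapping:.0f} holograms share each spot of the medium")
```

How much density this translates into depends on the dynamic range and signal-to-noise ratio left to each overlapping page, the trade-off taken up next.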
System optimization is a complex problem involving a large number of trade-offs. Especially for holographic data storage systems, the trade-off between capacity and transfer rate differs from that of other storage systems. Fundamentally, holographic data storage is based on multiplexing many holograms, N, in the same volume of the recording medium. For media with a linear response, the dynamic range available to each hologram is then roughly the total dynamic range of the medium divided by N. Because the diffraction efficiency of each hologram is proportional to the square of its index modulation, readout signal strength drops off as 1/N².
The larger the capacity of the device, the smaller the readout signal strength and the signal-to-noise ratio (SNR). In turn, a small SNR yields a large raw bit-error rate, which, above a threshold of roughly 10⁻³ to 10⁻⁴, cannot be lowered sufficiently through any known error-correction scheme. To boost SNR, detector integration times can be increased, though only at the expense of data-transfer rates. High data-capacity systems are therefore most readily demonstrated at small transfer rates, and vice versa.
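This trade-off can be made concrete with the M# relation implied above, in which each of N superimposed holograms ends up with diffraction efficiency roughly (M#/N)². The sketch below uses the M# ≈ 10 value quoted earlier for the near-stoichiometric lithium niobate; the hologram counts are illustrative:

```python
# Readout signal versus number of multiplexed holograms N, using eta = (M#/N)^2.
m_number = 10.0

for n_holograms in (100, 1_000, 10_000):
    eta = (m_number / n_holograms) ** 2
    print(f"N = {n_holograms:>6}: diffraction efficiency per hologram = {eta:.1e}")
# Larger capacity (larger N) thus means weaker holograms, longer detector
# integration times, and lower transfer rates.
```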
Demonstration Platforms
A number of holographic storage systems have been designed, built, and tested at Stanford University and Siros Technologies, Inc. of San Jose, Calif. The primary goals have been to gain insight into the underlying physics of the recording mechanisms and the system trade-offs required to achieve specified performance, as well as to integrate new components into working demonstration platforms. The most significant demonstrations (see Table 1) are described in the following sections.
Stanford all-digital system demonstration (1994). The first fully digital holographic data storage system was demonstrated at Stanford in 1994 by storing digital images and a short sound track [5]. The experiment showed that a fully functional system could be implemented with off-the-shelf optical and electronic components. Special emphasis was placed on using digital signal-processing techniques to overcome the noise issues limiting capacity and transfer rates. Another part of the experiment involved introducing a new differential coding scheme to significantly increase capacity while adding extra bits for channel and error-correcting coding.
The noise-tolerant nature of holographic data storage makes it possible to overcome problems associated with noise sources [4]. However, before the Stanford system results were published in 1994, no fully automated digital holographic data storage system had been implemented and no comprehensive study of bit-error-rate performance had been carried out on a system operating at reasonable data transfer rates [5]. Previously, achievable bit-error rates had been extrapolated statistically by sampling a small number of digital information bits from a random sampling of 1,000 holograms stored in a manually controlled system [9, 10].
Critical features of the Stanford implementation enabling the system to overcome previous obstacles included a novel differential encoding technique to increase error immunity, error-correction codes, and the distribution of consecutive information bits over multiple data pages. The result was a decrease in the probability of burst errors, which can occur in a single hologram data page. With these techniques, bit-error rates of 10⁻⁶ have been achieved at readout rates of 6.3×10⁶ pixels/s.
Figure 1 shows a generic diagram of the system. The storage medium is an iron-doped lithium niobate crystal, which is cut so its C-axis is at a 45-degree angle to the crystal surfaces. Because previously recorded holograms were erased as additional holograms were recorded, an appropriate recording schedule was implemented in the demonstration in order to store a large number of pages with equal diffraction efficiency [6].
If all holograms result in equal diffraction efficiencies, the system can choose a threshold intensity to distinguish “on” pixels from “off” pixels. Note that the words on and off are used here to refer to SLM and charge-coupled device (CCD) pixels, and 0 and 1 to refer to information bits. Incorrect measurement of the time constant can result in a schedule that leads to unequal diffraction efficiencies for each of the stored holograms and an increased bit-error rate. A 10% error in the assumed time constant can result in more than a factor of 2 variation in diffraction efficiency. In addition, laser fluctuations and anomalous writing behavior can result in unequal diffraction efficiencies.
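The sketch below illustrates such a schedule under a deliberately simplified model with a single write/erase time constant; the time constant, page count, and final exposure are illustrative assumptions, not measured LiNbO3 parameters. It constructs the schedule backwards so every page ends with equal strength, then shows how an error in the assumed time constant spreads the final diffraction efficiencies:

```python
import numpy as np

# Simplified model: each exposure of duration t writes a new grating of
# amplitude 1 - exp(-t/tau) while eroding all earlier gratings by exp(-t/tau).
tau = 100.0         # assumed write/erase time constant (arbitrary units)
n_pages = 100       # assumed number of superimposed pages
t_last = 0.9        # exposure chosen for the final page

# Build the schedule backwards: each earlier page must be written strongly
# enough to survive the erasure caused by every later exposure.
times = [t_last]
for _ in range(n_pages - 1):
    amp_next = 1.0 - np.exp(-times[0] / tau)        # amplitude of the later page
    target = amp_next * np.exp(times[0] / tau)      # pre-erasure amplitude needed
    times.insert(0, -tau * np.log(1.0 - target))

def final_efficiencies(schedule, tau_true):
    """Simulate sequential recording; return the final diffraction efficiencies."""
    amps = []
    for t in schedule:
        amps = [a * np.exp(-t / tau_true) for a in amps]   # erase earlier pages
        amps.append(1.0 - np.exp(-t / tau_true))           # write the new page
    return np.array(amps) ** 2                             # efficiency ~ amplitude^2

eta_good = final_efficiencies(times, tau)        # schedule matches the medium
eta_bad = final_efficiencies(times, 0.9 * tau)   # 10% error in the time constant
print("efficiency spread (max/min), correct constant:", round(eta_good.max() / eta_good.min(), 3))
print("efficiency spread (max/min), 10% error:", round(eta_bad.max() / eta_bad.min(), 3))
```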
To prevent these problems, the system’s designers devised and used a differential encoding technique, whereby the pixel sequence off-on is written to the SLM to represent a 0, and the pixel sequence on-off is written to represent a 1. In addition to being insensitive to page-to-page intensity fluctuations, the system’s differential encoding technique yields a lower probability of error, assuming a model of additive white noise (independent from pixel to pixel).
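A minimal sketch of this pixel-pair scheme (the page size, intensity scaling, and noise level are illustrative assumptions) shows why comparing the two pixels of each pair needs no global threshold and is insensitive to page-to-page intensity variations:

```python
import numpy as np

rng = np.random.default_rng(2)

def encode(bits: np.ndarray) -> np.ndarray:
    """Map each bit to a pixel pair: 0 -> off-on, 1 -> on-off."""
    pairs = np.where(bits[:, None] == 0, [0, 1], [1, 0])
    return pairs.reshape(-1)

def decode(intensities: np.ndarray) -> np.ndarray:
    """Compare the two pixels of each pair; no global threshold is needed."""
    pairs = intensities.reshape(-1, 2)
    return (pairs[:, 0] > pairs[:, 1]).astype(np.uint8)

bits = rng.integers(0, 2, 10_000)   # illustrative block of channel bits
pixels = encode(bits).astype(float)

# Weak page (30% nominal intensity) plus additive noise: a fixed threshold
# tuned for full-strength pages would fail, but pairwise comparison does not.
readout = 0.3 * pixels + rng.normal(0.0, 0.05, pixels.size)
errors = int(np.count_nonzero(decode(readout) != bits))
print("bit errors:", errors, "of", bits.size)
```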
The Siros fully automated system (PRISM 1996). Before 1996, demonstrations were under software control with offline data retrieval. Although it has long been recognized that software procedures can often be implemented in electronic hardware, many practical issues make such implementations far from trivial. At Siros, a fully electronic readout and control system was implemented in 1996, the first demonstration of a fully automated and electronically controlled holographic storage system. The system’s electronic architecture was based on a bus and field-programmable gate-array (FPGA) technology (basically reprogrammable logic devices). The optical system architecture was the same as the one in Figure 1. Temperature fixing in lithium niobate was implemented for nonvolatile readout, and a total capacity of 5MB of video data was stored and retrieved at video rates. Channel decoding, Reed-Solomon error-correction coding, bit shuffling, and data warping were all implemented in electronic hardware.
Stanford and Siros 100GB capacity and 1Gb/s readout system demonstration (2000). For thick media, including lithium niobate, capacity in the 90-degree geometry is usually limited by the medium’s dynamic range and noise, rather than by multiplexing crosstalk. For thin media, such as photopolymers, this limitation does not apply, and the number of holograms superimposed at a spatial location is determined largely by the number of degrees of freedom available for multiplexing. Angular multiplexing in the transmission geometry does not allow sufficient data storage density, so other multiplexing techniques, such as shift and peristrophic multiplexing, are required.
Unfortunately, the displacements necessary for achieving low crosstalk in the shift-multiplexing technique are still too great and therefore require the undesirable stop-and-go architecture for disk motion when using a continuous-wave laser. To avoid this problem, a very sensitive, relatively thick storage medium is required, one suitable for pulsed-laser recording with nsec pulses. Recent developments in the DARPA-funded PRISM consortium have made great strides toward achieving such a medium for recording in the green region of the spectrum using only a few hundred milliwatts of power, largely through the efforts of Polaroid Corp. and its spin-off company Aprilis, Inc. of Boston.
To achieve recording during constant rotation of the disk, a new multiplexing technique, based on a phase-modulated reference beam, had to be developed and implemented by the system’s designers. In the system currently being tested at Stanford, data densities greater than 70b/µm² are expected, for a total system capacity of 125GB and a transfer rate of 1Gb/s using the electronics described earlier.
The system’s overall architecture is outlined in Figure 3. A 1024×1024-pixel-matched (12.8µm square) 1,000-frames-per-second Kodak digital video camera and an IBM liquid crystal display are used as the detector and page composer, respectively. Recording and readout are done with a pulsed, frequency-doubled Nd:YAG laser (532nm wavelength, 500µJ/pulse, 25nsec pulse width). A rotating holographic photopolymer disk is mounted on a precision air-bearing spindle. Angular addressing is done through a precision optical shaft encoder providing 16,384 electrical reference pulses per rotation. With synchronization electronics, different angular positions on the disk can be addressed with an overall positioning repeatability of better than ±0.1µm. Different radial positions are addressed by a translation stage that moves the spindle and mounted disk in the vertical direction. (Figure 2 shows the optical layout for the HDSS write-once read-many, or WORM, demonstration platform.) In the current implementation, a speckle-correlation shift-multiplexing technique is used for hologram multiplexing (see Figure 4 for a photo of the experimental holographic disk device).
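A quick arithmetic check, assuming a nominal readout radius (the radius is not given in the text), shows why the synchronization electronics, rather than the raw encoder pulses, set the addressing repeatability:

```python
import math

# The 16,384 encoder pulses per rotation come from the text; the readout
# radius below is an assumed value used only for this estimate.
pulses_per_rev = 16_384
track_radius_mm = 25.0

rad_per_pulse = 2 * math.pi / pulses_per_rev
um_per_pulse = rad_per_pulse * track_radius_mm * 1_000
print(f"{um_per_pulse:.1f} um of track per encoder pulse")
# Roughly 10 um of track per pulse, so reaching +/-0.1 um repeatability relies
# on the synchronization electronics interpolating between encoder pulses.
```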
The system’s holographic channel-decoding electronics are designed for a sustained data transfer rate of 1Gb/s (although the actual data rate into the electronics is 1GB/s, since the camera has 8-bit resolution). Optically, however, the data can be read out at a much faster data rate, since at 1Gb/s, the system has not yet achieved its fundamental signal strength limit.
The system’s designers recently demonstrated that the optical data rate can be at least six times faster. In the demonstration, ~2.5MB of test data was encoded into images and recorded into a photopolymer disk in the form of holograms. The data was read by laser pulses at 6kHz (or 6Gb/s transfer rate, since each image contains 1Mb of channel data). A portion of the data (at a 1Gb/s data rate) was captured by the CCD camera and decoded through holographic channel-decoding electronics. Another sequence of holograms was then read by the camera, decoded by electronics, and verified on the next disk revolution. Figure 5 shows a sample hologram readout at 6Gb/s using a 500µm photopolymer disk. Note the substantial reduction in the image signal strength, due to the lower laser pulse energy at a higher data rate. Still, the raw error rate is within the acceptable range (~0.8% raw error rate), correctable by the Reed-Solomon error-correction code.
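The quoted rates follow directly from the page size, pulse rate, and camera resolution given above; a short check using those figures:

```python
# Arithmetic behind the quoted rates, using figures from the text: ~1 Mb of
# channel data per page, pages read at the 6kHz laser pulse rate, and 8-bit
# camera samples behind the sustained 1Gb/s decoded channel rate.
bits_per_page = 1_000_000
pulse_rate_hz = 6_000
optical_rate_gbps = bits_per_page * pulse_rate_hz / 1e9
print(f"optical readout rate ~{optical_rate_gbps:.0f} Gb/s")

camera_bits_per_sample = 8
decoded_rate_gbps = 1.0
raw_camera_gbps = decoded_rate_gbps * camera_bits_per_sample
print(f"raw camera data into the decoder ~{raw_camera_gbps:.0f} Gb/s "
      f"(= {raw_camera_gbps / 8:.0f} GB/s)")
```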
The recording density for the 6Gb/s demonstration was approximately 5b/µm². The system’s designers anticipate improved recording density (up to 25b/µm² and greater) at the current data rate in recording media with improved scatter, shrinkage, and dynamic-range characteristics. A 6Gb/s holographic-disk data rate (corresponding to a 350MB/s user data rate) represents an important milestone, reflecting the high parallelism and data-transfer performance of holographic digital information storage.
Outlook
Much progress has been made in recent years on new and improved recording materials for holographic data storage, as well as on improved components, especially large-format CCD and complementary metal-oxide semiconductor (CMOS) detector arrays and SLM arrays based on ferroelectric liquid-crystal media and microelectromechanical systems (MEMS). Lasers are smaller and more powerful than ever, and improved algorithms have been developed and implemented. The outlook for holographic data storage has never been brighter.
Commercialization, however, still depends mainly on the longevity of other optical storage technologies. For example, it is highly likely that DVD technology will require some kind of volumetric storage scheme, either in the form of a multilayer or a holographic approach to increase data-storage capacity. But multilayer technologies lack the advantage of the high data-transfer rates available through holographic systems. It is therefore quite possible that holographic data storage could be commercialized as a follow-on to blue-laser DVD technology within 10 years. But holographic data storage will require a new—and costly—manufacturing infrastructure to produce competitively priced products in sufficient quantity and quality to satisfy the storage market, which already has lots of high-performance options.
As a potentially mass-market product, polymer-based systems will most likely be the first to be commercialized, because they leverage the existing manufacturing infrastructure better than the crystal-based, all-solid-state devices. Moreover, systems using polymer materials allow 100 to 1,000 times faster write speeds than those using inorganic photorefractive crystals like lithium niobate.
Figures
Figure 1. The 90-degree optical architecture, including electronic control unit.
Figure 2. Rotating disk architecture using photopolymer media.
Figure 3. Overall layout of the HDSS WORM holographic disk storage system.
Figure 4. Photo of the HDSS ultra-high-speed holographic disk demonstration system.
Figure 5. Sample 1024×1024 hologram readout at 6Gb/s (left) and reconstructed at 1Gb/s data rate (right).