A Better Calibration Tool
Metrology instruments that image a surface or other topographical object do not reproduce its shape with 100% fidelity. Characterizing an instrument's response across the entire range of spatial frequencies ultimately defines its usefulness and drives the development of new technology. Over the past several years, a team from Lawrence Berkeley National Laboratory, Argonne National Laboratory and Brookhaven National Laboratory has developed the Binary Pseudo-Random Calibration Tool, a technology that promises to revolutionize how metrology instrumentation is characterized. By fabricating and then imaging a binary pseudo-random array, the team can determine an instrument's entire modulation transfer function. The team developed the algorithms and fabrication techniques to extend this concept across all practical spatial frequencies, from visible light to the atomic scale, and has demonstrated the technique's success with white light interferometers, SEMs, TEMs and x-ray microscopes. Recently, the labs joined forces with ABeam Technologies Inc. to deploy the technique for characterization of nanometer-resolution electron-beam lithography tools.
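The principle behind the tool is that a binary pseudo-random pattern has an inherently flat ("white") power spectrum, so the spectrum of its measured image directly reveals the instrument's frequency response. The following Python sketch illustrates the idea in one dimension, with a synthetic Gaussian blur standing in for a real instrument; it is an illustration of the concept under assumed sequence length and blur width, not the team's published algorithm.

```python
import numpy as np

def mls(n_bits=10, taps=(10, 7)):
    """Maximum-length (binary pseudo-random) sequence via an LFSR.
    Taps (10, 7) give a full-period m-sequence of length 2**10 - 1."""
    state = [1] * n_bits
    seq = []
    for _ in range(2**n_bits - 1):
        seq.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(seq, dtype=float)

# Ideal binary pseudo-random test pattern (values 0/1, flat power spectrum).
ideal = mls()

# Stand-in "instrument": blur the ideal pattern with a hypothetical Gaussian
# point spread function (a real measurement replaces this step).
x = np.arange(-25, 26)
psf = np.exp(-0.5 * (x / 3.0) ** 2)
psf /= psf.sum()
measured = np.convolve(ideal, psf, mode="same")

# MTF estimate: ratio of measured to ideal magnitude spectra.
F_ideal = np.abs(np.fft.rfft(ideal - ideal.mean()))
F_meas = np.abs(np.fft.rfft(measured - measured.mean()))
mtf = F_meas / np.where(F_ideal == 0, 1, F_ideal)
freqs = np.fft.rfftfreq(ideal.size)   # spatial frequency axis

print("MTF at Nyquist:", mtf[-1])
```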
◗ Argonne National Laboratory, www.anl.gov
Chip Off the Brain
Many applications would benefit from computers that can emulate
the brain’s abilities for perception, recognition, association and decision. Despite decades of effort, it has proven difficult to program
computers of traditional design to perform these tasks with the same
efficiency, small volume and low power consumption as the brain.
IBM’s TrueNorth neurosynaptic chip embodies a new architecture
inspired by the brain and is designed to perform similar tasks of perception, recognition, association and decision efficiently. Each chip
contains 1 million neurons and 256 million synapses and is designed to scale homogeneously into larger systems, yet, running in real time, the chip consumes only 70 mW of power. The chip is, in effect, a supercomputer the size of a postage stamp that runs on the power of a hearing aid battery.
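TrueNorth's neurons communicate through discrete spikes rather than continuous values. As a rough illustration of that style of computation (a generic leaky integrate-and-fire sketch, not IBM's actual neuron model or programming interface), the Python snippet below simulates a small spiking population:

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 16          # toy population (a TrueNorth core has 256 neurons)
n_steps = 100           # simulation ticks
threshold = 1.0         # firing threshold (arbitrary units)
leak = 0.95             # membrane leak per tick
weights = rng.normal(0, 0.3, (n_neurons, n_neurons))  # synaptic weights

potential = np.zeros(n_neurons)   # membrane potentials
spikes = np.zeros(n_neurons)      # spikes emitted in the previous tick

for t in range(n_steps):
    external = rng.random(n_neurons) < 0.05          # random input spikes
    potential = leak * potential + weights @ spikes + external
    spikes = (potential >= threshold).astype(float)  # fire on threshold crossing
    potential[spikes == 1] = 0.0                     # reset fired neurons

print("spikes in final tick:", int(spikes.sum()))
```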
◗ IBM, www.ibm.com
Scientists analyze data visually, often turning data into images or even movies, while statistical analysis operates over the entire data set. Today's simulations on high-performance computers make visualizing the data untenable because of the resources required to move, search and analyze all of it at once. A solution to this problem lies in identifying, retrieving and analyzing smaller subsets of data from within the multidimensional whole, and doing so requires managing the multiple dimensions of simulation data. In collaboration with EMC, Los Alamos National Laboratory has developed MDHIM (Multidimensional Hashed Indexed Middleware), a software framework
that enables applications to take advantage of the mechanisms provided by a parallel key/value store: storing data in a global, multidimensional order and subsetting massive data along multiple dimensions. A key/value store is a database in which entries are referenced by a single key; key/value pairs are inserted into the database for later retrieval. Distributed key/value stores extend the concept by letting multiple servers each serve a portion of the data (the key space), which load-balances accesses and improves performance.
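The heart of such a design is partitioning an ordered key space across servers so that a range (subset) query only touches the servers holding the relevant keys. The Python sketch below is a simplified, single-process stand-in for that idea, not the MDHIM API; the shard boundaries and timestep keys are hypothetical.

```python
from bisect import insort, bisect_left, bisect_right

class RangePartitionedKVStore:
    """Toy range-partitioned key/value store (illustrative, not MDHIM).

    Keys map to shards by range, so each shard holds a contiguous,
    sorted slice of the global key space and range queries touch
    only the shards that overlap the query."""

    def __init__(self, boundaries):
        # boundaries[i] is the smallest key owned by shard i+1,
        # e.g. [100, 200] -> shard 0: keys < 100, shard 1: 100-199, ...
        self.boundaries = boundaries
        self.shards = [{"keys": [], "values": {}}
                       for _ in range(len(boundaries) + 1)]

    def _shard_for(self, key):
        return bisect_right(self.boundaries, key)

    def put(self, key, value):
        shard = self.shards[self._shard_for(key)]
        if key not in shard["values"]:
            insort(shard["keys"], key)      # keep per-shard keys sorted
        shard["values"][key] = value

    def get(self, key):
        return self.shards[self._shard_for(key)]["values"].get(key)

    def range_query(self, lo, hi):
        """Return (key, value) pairs with lo <= key < hi in global order."""
        out = []
        for shard in self.shards[self._shard_for(lo):self._shard_for(hi) + 1]:
            keys = shard["keys"]
            for k in keys[bisect_left(keys, lo):bisect_left(keys, hi)]:
                out.append((k, shard["values"][k]))
        return out

# Hypothetical usage: timestep keys spread over three shards.
store = RangePartitionedKVStore(boundaries=[100, 200])
for t in range(0, 300, 10):
    store.put(t, {"timestep": t})
print(store.range_query(90, 130))   # touches only shards 0 and 1
```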
◗ Los Alamos National Laboratory, www.lanl.gov
Mitigating Malicious Threats
In light of high-profile network compromises, software tools must
help mitigate threats from malicious sources. MIT Lincoln Laboratory’s Lincoln Adaptable Real-time Information Assurance Test-bed
(LARIAT) fills a very important gap. LARIAT emulates active, complex networks, including their real-time interactions with the Internet.
This is accomplished by creating a closed, disconnected network,
augmented with realistic virtual users that emulate real users and
interactions. The virtual network can be used to perform a variety
of activities, including assessments of or experimentation with different kinds of systems (such as intrusion detection systems), and for red-blue (adversary-defender) exercises that train cyber teams and evaluate system protection schemes.
◗ MIT Lincoln Laboratory, www.ll.mit.edu
Getting to the CORE of HPC
Oak Ridge National Laboratory’s CORE-Direct is an application
acceleration and scaling technology available with the InfiniBand
HCA ecosystem for HPC, big data and data center applications.
CORE-Direct’s software and hardware are available from Mellanox.
CORE-Direct technology accelerates the main determinant of performance and scalability in parallel applications: group data exchange operations. To achieve this, it adds software and hardware capabilities that offload and execute the data exchange operations on the network, abstract the memory hierarchies on the node and provide a powerful abstraction for applications, offering a novel and comprehensive solution. A testament to this is the technology's wide and successful adoption: more than 28% of the systems on the Top 500 list of the world's fastest supercomputers use CORE-Direct.
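Group data exchange operations correspond to collective communication such as all-reduce in MPI-style applications. The sketch below uses standard mpi4py (not the CORE-Direct interface itself) to issue a non-blocking all-reduce and overlap it with local computation, the pattern that offloading collectives to the network is designed to accelerate; the buffer size and file name are assumptions.

```python
# Requires an MPI installation and mpi4py; run with, e.g.:
#   mpirun -np 4 python allreduce_overlap.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

send = np.full(1_000_000, rank, dtype=np.float64)   # local contribution
recv = np.empty_like(send)                           # global sum lands here

# Start the collective; with hardware collective offload, the network
# adapter rather than the CPU progresses this exchange.
req = comm.Iallreduce(send, recv, op=MPI.SUM)

# Overlap: do unrelated local work while the reduction is in flight.
local = np.sin(send).sum()

req.Wait()                                           # collective complete
if rank == 0:
    print("reduced[0] =", recv[0], "local =", local)
```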
◗ Oak Ridge National Laboratory, www.ornl.gov
Intelligent Web Crawler
The Internet offers valuable publicly available information in unprecedented quantities, as long as one is patient enough to search methodically and intelligently. In competitive markets, business organizations
can’t afford to spend time gathering such information before making
a time-sensitive business decision. Oak Ridge National Laboratory’s
iCRAWL is a powerful cyber-informatics tool available to businesses
and organizations that rapidly improves digital marketing intelligence. The software enables digital marketing analysts in all industry
and government sectors to discover and collect the Internet-wide digital footprints of consumers, competitors, media and other stakeholders related to any specific project. The tool is also capable of adaptively learning from the outcomes and self-monitored performance data of its autonomous online information discovery and gathering operations.
◗ Oak Ridge National Laboratory, www.ornl.gov
Increasing Data Value
Ryft’s Ryft ONE significantly increases the value of a company’s data
by speeding and simplifying the data analytics pipeline through a combination of purpose-built hardware and open interface software. The
Ryft ONE simultaneously analyzes up to 48 TB of batch and streaming
data at 10 Gb/sec or faster. It leverages over a decade of working with
the world’s biggest users of complex big data to overcome the challenges of legacy x86 computing architectures—including massive I/O,