During the Second World War, the German military developed some very sophisticated technologies for the time, including the V-2 rockets it used to rain destruction on London. Yet the V-2, along with much other German military hardware, depended on an obscure and seemingly antiquated component you’ve probably never heard of: the magnetic amplifier, or mag amp.
In the United States, mag amps had long been considered obsolete—“too slow, cumbersome, and inefficient to be taken seriously,” according to one source. So U.S. military-electronics experts of that era were baffled by the extensive German use of this device, which they first learned about from interrogating German prisoners of war. What did the Third Reich’s engineers know that had eluded the Americans?
After the war, U.S. intelligence officers scoured Germany for useful scientific and technical information. Four hundred experts sifted through billions of pages of documents and shipped 3.5 million microfilmed pages back to the United States, along with almost 200 tonnes of German industrial equipment. Among this mass of information and equipment was the secret of Germany’s magnetic amplifiers: metal alloys that made these devices compact, efficient, and reliable.
U.S. engineers were soon able to reproduce those alloys. As a result, the 1950s and ’60s saw a renaissance for magnetic amplifiers, during which they were used extensively in the military, aerospace, and other industries. They even appeared in some early solid-state digital computers before giving way entirely to transistors. Nowadays, that history is all but forgotten. So here I’ll offer the little-known story of the mag amp.
An amplifier, by definition, is a device that allows a small signal to control a larger one. An old-fashioned triode vacuum tube does that using a voltage applied to its grid electrode. A modern field-effect transistor does it using a voltage applied to its gate. The mag amp exercises control electromagnetically.
Magnetic amplifiers were used for a variety of applications, including in the infamous V-2 rockets [top] that the German military employed during the Second World War and in the Magstec computer [middle], completed in 1956. The British Elliott 803 computer of 1961 [bottom] used related core-transistor logic.
From top: Fox photos/Getty Images; Remington Rand Univac; Smith Archive/Alamy
To understand how it works, first consider a simple inductor, say, a wire coiled around an iron rod. Such an inductor will tend to block the flow of alternating current through the wire. That’s because when current flows, the coil creates an alternating magnetic field, concentrated in the iron rod. And that varying magnetic field induces voltages in the wire that act to oppose the alternating current that created the field in the first place.
If such an inductor carries a lot of current, the rod can reach a state called saturation, whereby the iron cannot become any more magnetized than it already is. When that happens, current passes through the coil virtually unimpeded. Saturation is usually undesirable, but the mag amp exploits this effect.
Physically, a magnetic amplifier is built around a metallic core of material that can easily be saturated, typically a ring or square loop with a wire wrapped around it. A second wire also wrapped around the core forms a control winding. The control winding includes many turns of wire, so a relatively small direct current passing through it can force the iron core into or out of saturation.
The mag amp thus behaves like a switch: When saturated, it lets the AC current in its main winding pass unimpeded; when unsaturated, it blocks that current. Amplification occurs because a relatively small DC control current can modify a much larger AC load current.
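To make that switchlike behavior concrete, here is a minimal numerical sketch, in Python, of a magnetic amplifier treated as a saturable reactor. All of the component values, and the smooth tanh curve standing in for the core’s saturation behavior, are illustrative assumptions rather than measured data; a real square-loop core switches far more abruptly.

```python
# Toy model of a magnetic amplifier as a saturable reactor.
# Component values and the tanh saturation curve are assumptions chosen only
# to illustrate how a small DC control current gates a much larger AC current.
import math

V_AC = 120.0    # RMS supply voltage, volts (assumed)
FREQ = 60.0     # line frequency, hertz (assumed)
R_LOAD = 50.0   # load resistance, ohms (assumed)
L_UNSAT = 5.0   # winding inductance with the core unsaturated, henries (assumed)
L_SAT = 0.01    # residual inductance once the core saturates, henries (assumed)
I_SAT = 0.05    # control current that fully saturates the core, amperes (assumed)

def load_current(i_control: float) -> float:
    """RMS load current for a given DC control current."""
    # Effective inductance collapses from L_UNSAT toward L_SAT as the control
    # current drives the core into saturation.
    saturation = math.tanh(abs(i_control) / I_SAT)   # 0 = unsaturated, 1 = saturated
    inductance = L_UNSAT + (L_SAT - L_UNSAT) * saturation
    reactance = 2 * math.pi * FREQ * inductance      # X_L = 2*pi*f*L
    impedance = math.hypot(R_LOAD, reactance)        # magnitude of series R-L impedance
    return V_AC / impedance

if __name__ == "__main__":
    for i_ctl in (0.0, 0.01, 0.02, 0.05, 0.10):
        print(f"control {i_ctl * 1000:5.1f} mA -> load {load_current(i_ctl):5.2f} A")
```

With these made-up values, a control current of roughly 100 milliamperes swings the load current from under a tenth of an ampere to well over an ampere, which is the essence of the amplification described above.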
The history of magnetic amplifiers starts in the United States with some patents filed in 1901. By 1916, large magnetic amplifiers were being used for transatlantic radio telephony, carried out with an invention called an Alexanderson alternator, which produced a high-power, high-frequency alternating current for the radio transmitter. A magnetic amplifier modulated the output of the transmitter according to the strength of the voice signal to be transmitted.
In the 1920s, improvements in vacuum tubes made this combination of Alexanderson alternator and magnetic amplifier obsolete. This left the magnetic amplifier to play only minor roles, such as dimming lights in theaters.
Germany’s later successes with magnetic amplifiers hinged largely on the development of advanced magnetic alloys. A magnetic amplifier built from these materials switched sharply between the on and off states, providing greater control and efficiency. These materials were, however, exquisitely sensitive to impurities, variations in crystal size and orientation, and even mechanical stress. So they required an exacting manufacturing process.
The best-performing German material, developed in 1943, was called Permenorm 5000-Z. It was an extremely pure fifty/fifty nickel-iron alloy, melted under a partial vacuum. The metal was then cold-rolled as thin as paper and wound around a nonmagnetic form. The result resembled a roll of tape, with thin Permenorm metal making up the tape. After winding, the module was annealed in hydrogen at 1,100 °C for 2 hours and then rapidly cooled. This process oriented the metal crystals so that they behaved like one large crystal with uniform properties. Only after this was done were wires wrapped around the core.
By 1948, scientists at the U.S. Naval Ordnance Laboratory, in Maryland, had figured out how to manufacture this alloy, which was soon marketed by an outfit called Arnold Engineering Co. under the name Deltamax. The arrival of this magnetic material in the United States led to renewed enthusiasm for magnetic amplifiers, which tolerated extreme conditions and didn’t burn out like vacuum tubes. Mag amps thus found many applications in demanding environments, especially military, space, and industrial control.
During the 1950s, the U.S. military was using magnetic amplifiers in automatic pilots, fire-control apparatus, servo systems, radar and sonar equipment, the RIM-2 Terrier surface-to-air missile, and many other roles. One Navy training manual of 1951 explained magnetic amplifiers in detail—although with a defensive attitude about their history: “Many engineers are under the impression that the Germans invented the magnetic amplifier; actually it is an American invention. The Germans simply took our comparatively crude device, improved the efficiency and response time, reduced weight and bulk, broadened its field of application, and handed it back to us.”
The U.S. space program also made extensive use of magnetic amplifiers because of their reliability. For example, the Redstone rocket, which launched Alan Shepard into space in 1961, used magnetic amplifiers. In the Apollo missions to the moon during the 1960s and ’70s, magnetic amplifiers controlled power supplies and fan blowers. Satellites of that era used magnetic amplifiers for signal conditioning, for current sensing and limiting, and for telemetry. Even the space shuttle used magnetic amplifiers to dim its fluorescent lights.
Magnetic amplifiers were also used in Redstone rockets, like the one shown here behind astronauts John Glenn, Virgil Grissom, and Alan Shepard. Universal Images Group/Getty Images
Magnetic amplifiers also found heavy use in industrial control and automation, in products marketed under such brand names as General Electric’s Amplistat, CGS Laboratories’ Increductor, Westinghouse’s Cypak (cybernetic package), and Librascope’s Unidec (universal decision element).
The magnetic materials developed in Germany during the Second World War had their largest postwar impact of all, though, on the computer industry. In the late 1940s, researchers immediately recognized the ability of the new magnetic materials to store data. A circular magnetic core could be magnetized counterclockwise or clockwise, storing a 0 or a 1. Having what’s known as a rectangular hysteresis loop ensured that the material would stay solidly magnetized in one of these states after power was removed.
Researchers soon constructed what was called core memory from dense grids of magnetic cores. And these technologists soon switched from using wound-metal cores to cores made from ferrite, a ceramic material containing iron oxide. By the mid-1960s, ferrite cores were stamped out by the billions as manufacturing costs dropped to a fraction of a cent per core.
But core memory is not the only place where magnetic materials had an influence on early digital computers. The first generation of those machines, starting in the 1940s, computed using vacuum tubes. These were replaced in the late 1950s with a second generation based on transistors, followed by third-generation computers built from integrated circuits.
But technological progress in computing wasn’t, in fact, this linear. Early transistors weren’t an obvious winner, and many other alternatives were developed. Magnetic amplifiers were one of several largely forgotten computing technologies that fell between the generations.
That’s because researchers in the early 1950s realized that magnetic cores could not only hold data but also perform logic functions. By putting multiple windings around a core, inputs could be combined. A winding in the opposite direction could inhibit other inputs, for example. Complex logic circuits could be implemented by connecting such cores together in various arrangements.
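To illustrate the idea (as a deliberately idealized sketch, not a reconstruction of any particular machine’s circuit), a core can be treated as a threshold element: each energized input winding contributes one unit of magnetomotive force, an inhibit winding wound in the opposite sense subtracts one, and the core flips and emits an output pulse only when the net drive reaches a chosen threshold. The function name and threshold values below are assumptions for illustration only.

```python
from typing import Sequence

def core_gate(inputs: Sequence[bool],
              inhibits: Sequence[bool] = (),
              threshold: int = 1) -> bool:
    """Idealized magnetic-core logic element.

    Each energized input winding adds one unit of drive; each energized
    inhibit winding (wound in the opposite direction) subtracts one. The
    core flips and produces an output pulse only if the net drive reaches
    the threshold.
    """
    drive = sum(inputs) - sum(inhibits)
    return drive >= threshold

# A threshold of 1 makes the core act as an OR gate; a threshold equal to the
# number of inputs makes the same core an AND gate; an energized inhibit
# winding gives AND-NOT behavior.
print(core_gate([True, False]))                    # OR:  True
print(core_gate([True, False], threshold=2))       # AND: False
print(core_gate([True, True], inhibits=[True]))    # two inputs, one inhibit: True
```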
How Magnetic Amplifiers Amplify
The magnetic amplifier exploits the fact that the presence of magnetizable material [tan] in the core of an induction coil increases its impedance. Reducing the influence of that magnetic material by physically withdrawing it from a coil would reduce its impedance, allowing more power to flow to an AC load.
The influence of a magnetizable material, here taking the form of a toroidal core [tan], can be changed by applying a DC bias using a second coil [left side of toroid]. Applying a DC bias current sufficient to force the material into a condition called saturation—a state in which it cannot become more magnetized—is functionally equivalent to removing the material from the coil, which allows more power to flow to the AC load.
A more realistic circuit would include two counter-wound AC coils, to avoid inducing currents in the control winding. It would also include diodes, shown here in a bridge configuration, allowing the circuit to control a DC load. Feedback coils [not shown] can be used to increase amplification. David Schneider
In 1956, the Sperry Rand Co. developed a high-speed magnetic amplifier called the Ferractor, capable of operating at several megahertz. Each Ferractor was built by winding a dozen wraps of one-eighth-mil (about 3 micrometers) Permalloy tape around a 0.1-inch (2.5-mm) nonmagnetic stainless-steel bobbin.
The Ferractor’s performance was due to the remarkable thinness of this tape in combination with the tiny dimensions of the bobbin. Sperry Rand used the Ferractor in a military computer called the Univac Magnetic Computer, also known as the Air Force Cambridge Research Center (AFCRC) computer. This machine contained 1,500 Ferractors and 9,000 germanium diodes, as well as a few transistors and vacuum tubes.
Sperry Rand later created business computers based on the AFCRC computer: the Univac Solid State (known in Europe as the Univac Calculating Tabulator) followed by the less expensive STEP (Simple Transition Electronic Processing) computer. Although the Univac Solid State didn’t completely live up to its name—its processor used 20 vacuum tubes—it was moderately popular, with hundreds sold.
Another division of Sperry Rand built a computer called Bogart to help with codebreaking at the U.S. National Security Agency. Fans of Casablanca and Key Largo will be disappointed to learn that this computer was named after the well-known New York Sun editor John Bogart. This relatively small computer earned that name because it edited cryptographic data before it was processed by the NSA’s larger computers.
Five Bogart computers were delivered to the NSA between 1957 and 1959. They employed a novel magnetic-amplifier circuit designed by Seymour Cray, who later created the famous Cray supercomputers. Reportedly, out of his dozens of patents, Cray was most proud of his magnetic-amplifier design.
Computers based on magnetic amplifiers didn’t always work out so well, though. For example, in the early 1950s, Swedish billionaire industrialist Axel Wenner-Gren created a line of vacuum-tube computers, called the ALWAC (Axel L. Wenner-Gren Automatic Computer). In 1956, he told the U.S. Federal Reserve Board that he could deliver a magnetic-amplifier version, the ALWAC 800, in 15 months. After the Federal Reserve Board paid US $231,800, development of the computer ran into engineering difficulties, and the project ended in total failure.
Advances in transistors during the 1950s led, of course, to the decline of computers using magnetic amplifiers. But for a time, it wasn’t clear which technology was superior. In the mid-1950s, for example, Sperry Rand was debating between magnetic amplifiers and transistors for the Athena, a 24-bit computer to control the Titan nuclear missile. Cray built two equivalent computers to compare the technologies head-to-head: the Magstec (magnetic switch test computer) used magnetic amplifiers, while the Transtec (transistor test computer) used transistors. Although the Magstec performed slightly better, it was becoming clear that transistors were the wave of the future. So Sperry Rand built the Univac Athena computer from transistors, relegating mag amps to minor functions inside the computer’s power supply.
In Europe, too, the transistor was battling it out with the magnetic amplifier. For example, engineers at Ferranti, in the United Kingdom, developed magnetic-amplifier circuits for their computers. But they found that transistors provided more reliable amplification, so they replaced the magnetic amplifier with a transformer in conjunction with a transistor. They called this circuit the Neuron because it produced an output if the inputs exceeded a threshold, analogous to a biological neuron. The Neuron became the heart of Ferranti’s Sirius and Orion business computers.
Another example is the Polish EMAL-2 computer of 1958, which used magnetic-core logic along with 100 vacuum tubes. This 34-bit computer was Poland’s first truly productive digital computer. It was compact but slow, performing only 150 or so operations per second.
And in the Soviet Union, the 15-bit LEM-1 computer from 1954 used 3,000 ferrite logical elements (along with 16,000 selenium diodes). It could perform 1,200 additions per second.
In France, magnetic amplifiers were used in the CAB 500 (Calculatrice Arithmétique Binaire 500), sold in 1960 for scientific and technical use by a company called Société d’Electronique et d’Automatisme (SEA). This 32-bit desk-size computer used a magnetic logic element called the Symmag, along with transistors and a vacuum-tube power supply. As well as being programmed in Fortran, Algol, or SEA’s own language, PAF (Programmation Automatique des Formules), the CAB 500 could be used as a desk calculator.
Some computers of this era used multiaperture cores with complex shapes to implement logic functions. In 1959, engineers at Bell Laboratories developed a ladder-shaped magnetic element called the Laddic, which implemented logic functions by sending signals around different “rungs.” This device was later used in some nuclear-reactor safety systems.
Another approach along these lines was something called the Biax logic element—a ferrite cube with holes along two axes. Another was dubbed the transfluxor, which had two circular openings. Around 1961, engineers at the Stanford Research Institute built the all-magnetic logic computer for the U.S. Air Force using such multiaperture magnetic devices. Doug Engelbart, who famously went on to invent the mouse and much of the modern computer user interface, was a key engineer on this computer.
Some computers of the time used transistors in combination with magnetic cores. The idea was to minimize the number of then-expensive transistors. This approach, called core-transistor logic (CTL), was used in the British Elliott 803 computer, a small system introduced in 1959 with an unusual 39-bit word length. The Burroughs D210 magnetic computer of 1960, a compact computer of just 35 pounds (about 16 kilograms) designed for aerospace applications, also used core-transistor logic.
This board from a 1966 IBM System/360 [top] shows some of the machine’s magnetic-core memory, which made use of small ferrite rings through which wires were strung [bottom]. Top: Maximilian Schönherr/picture-alliance/dpa/AP; Bottom: Sheila Terry/Rutherford Appleton Laboratory/Science Source
Core-transistor logic was particularly popular for space applications. A company called Di/An Controls produced a line of logic circuits and claimed that “most space vehicles are packed with them.” A competing core-transistor-logic product, the Pico-Bit, was advertised in 1964 as “Your best bit in space.” Early prototypes of NASA’s Apollo Guidance Computer were built with core-transistor logic, but in 1962 the designers at MIT made a risky switch to integrated circuits.
Even some “fully transistorized” computers made use of magnetic amplifiers here and there. The MIT TX-2 of 1958 used them to control its tape-drive motors, while the IBM 7090, introduced in 1959, and the popular IBM System/360 mainframes, introduced in 1964, used magnetic amplifiers to regulate their power supplies. Control Data Corp.’s 160 minicomputer of 1960 used a magnetic amplifier in its console typewriter. Magnetic amplifiers were too slow for the logic circuits in the Univac LARC supercomputer of 1960, but they were used to drive its core memory.
In the 1950s, engineers in the U.S. Navy had called magnetic amplifiers “a rising star” and one of “the marvels of postwar electronics.” As late as 1957, more than 400 engineers attended a conference on magnetic amplifiers. But interest in these devices steadily declined during the 1960s when transistors and other semiconductors took over.
Yet long after everyone figured that these devices were destined for the dust heap of history, mag amps found a new application. In the mid-1990s, the ATX standard for personal computers required a carefully regulated 3.3-volt power supply. It turned out that magnetic amplifiers were an inexpensive yet efficient way to control this voltage, making the mag amp a key part of most PC power supplies. As before, this revival of magnetic amplifiers didn’t last: DC-DC regulators have largely replaced magnetic amplifiers in modern power supplies.
All in all, the history of magnetic amplifiers spans about a century, with them becoming popular and then dying out multiple times. You’d be hard pressed to find a mag amp in electronic hardware produced today, but maybe some new application—perhaps for quantum computing or wind turbines or electric vehicles—will breathe life into them yet again.