Sunday, October 30, 2022

seven small microphones

 ── acoustic sensor system
 ── seven small microphones
 ── far─field speech recognition
 ── far─field acoustic sensor system

Annie Jacobsen, The pentagon's brain : an uncensored history of DARPA, America's top secret military research agency, 2015 

p.383
Boomerang was DARPA's response to sniper threats. 
It was an acoustic sensor system made up of seven small microphones that attached to a military vehicle, listened for shooter information, and notified soldiers precisely where the fire was coming from, all in less than a second.  The Boomerang system was able to detect shock waves from a sniper's incoming bullets, as well as muzzle blast, then relay that information to soldiers. 
([ a far─field acoustic sensor system should get more accurate as you get more sample data; if you have the funding, you should keep working on the program; consider deploying them on freeways and highways to get as much data as you can; ... ])
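A minimal sketch of the direction-finding idea behind such a system: under a far-field (plane-wave) assumption, the relative arrival times of a blast across a small microphone array determine the bearing by least squares. The seven-microphone ring layout, 4 cm radius, and test numbers below are hypothetical, not Boomerang's actual design.

```python
import numpy as np

# Hypothetical sketch: bearing estimation from a 7-microphone array under a
# plane-wave (far-field) model.  Geometry and numbers are invented, not
# Boomerang's actual design.
C = 343.0  # speed of sound in air, m/s

ring = np.arange(6) * np.pi / 3                      # six mics in a ring...
mics = np.vstack([np.zeros(2),                       # ...plus one at the center
                  0.04 * np.c_[np.cos(ring), np.sin(ring)]])

def bearing_deg(arrival_times):
    """Least-squares fit of t_i = t0 - (r_i . d)/C; returns the bearing of d."""
    A = np.c_[np.ones(len(mics)), -mics / C]         # unknowns: [t0, dx, dy]
    x, *_ = np.linalg.lstsq(A, arrival_times, rcond=None)
    d = x[1:] / np.linalg.norm(x[1:])                # normalize direction estimate
    return float(np.degrees(np.arctan2(d[1], d[0])))

# synthetic check: a muzzle blast arriving from a 60-degree bearing
d_true = np.array([np.cos(np.radians(60)), np.sin(np.radians(60))])
print(round(bearing_deg(1.0 - mics @ d_true / C), 1))  # -> 60.0
```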

p.383
a more advanced Boomerang-based technology called ...
  ... was a vehicle-mounted system that fused radar and signal-processing technologies to quickly detect much larger projectiles coming at coalition vehicles, including rocket-propelled grenades, antitank guided missiles, and even direct mortar fire.  A sensor system inside the  ... would be able to identify where the shot came from and relay that information to all other vehicles in the convoy.  

383  “Shot. Two o'clock”:  Raytheon news release, BBN Technologies, Products and Services, Boomerang III.  [p.494]
383  CROSSHAIRS: DARPA, news release, “DARPA's CROSSHAIRS Counter Shooter System”, October 5, 2010.  [p.494]

Annie Jacobsen, The pentagon's brain : an uncensored history of DARPA, America's top secret military research agency, 2015 
   ____________________________________
[[  case study 3:  natural speech interface [NSI]: far─field speech recognition, natural voice speaker, Skills Kit, which allowed other companies to build voice-enabled apps ]] 

 • case study:  Amazon Echo (4 years)

Brad Stone, Amazon unbound: Jeff Bezos and the invention of a global empire, 2021

 • [ seven omnidirectional microphones ] at the top
a cylinder elongated to create separation between the array of seven omnidirectional microphones at the top and the speakers at the bottom, with some 14 hundred holes punctured in the metal tubing to push out air and sound. 

 • The math suggested they would need to roughly double the scale of their data collection efforts to achieve each successive 3 percent [ 3% ] increase in Alexa's accuracy., p.37, Brad Stone, Amazon unbound: Jeff Bezos and the invention of a global empire, 2021.  

p.23
   The initiative was originally designated inside Lab126 as Project D.  It would come to be known as the Amazon Echo, and by the name of its virtual assistant, Alexa. 

p.24, p.45
Project D (internally ‘Doppler’), later launched as the ‘Amazon Echo’ with its virtual assistant ‘Alexa’ 
 January 4, 2011, first email from Bezos on Project D, p.24
November 6, 2014, product launch, p.45

([
  within a four─year time horizon Amazon developed a voice─enabled user interface inside a real─world working product: 
   ─ developed far─field speech recognition
   ─ refined speech output (speaking and sounding like a natural voice)
   ─ back─office technical development    
   ─ developed the plan to gather enough data for the far─field speech recognition
   ─ the heavy lifting of the speech recognition and other sensory data processing happens at the data center 
   ─ needs an internetwork [Internet or VPN] connection to the data center
   ─ (( I would be interested to know: if you were to connect an Amazon Echo inside a corporate network and configure the device with a proxy server to communicate with the Amazon servers, what else would the Echo need to connect to in order to work properly, and how would a corporate firewall react to this new traffic?  (see the sketch after this block) ))
   ─ port number for Amazon Echo (Alexa) 
   ─ for example, the port number for e─mail (SMTP) is 25

 • The math suggested they would need to roughly double the scale of their data collection efforts to achieve each successive 3 percent [3%] increase in Alexa's accuracy., p.37, Brad Stone, Amazon unbound: Jeff Bezos and the invention of a global empire, 2021.  

   ])
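A minimal sketch of the egress test that question implies. Consumer cloud devices like the Echo generally tunnel their traffic over HTTPS (TCP 443) to cloud endpoints rather than using a device-specific well-known port the way e-mail uses SMTP on 25; the hostname below is a placeholder, not Amazon's actual service endpoint.

```python
import socket

# Hypothetical egress check: can a device on this network open a TCP
# connection to a cloud endpoint on the HTTPS port (443)?  A corporate
# firewall or mandatory proxy that blocks direct 443 egress makes this fail.
def can_reach(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_reach("voice-endpoint.example.com"))  # placeholder hostname, not Amazon's
```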

p.462  Index
Amazon Alexa, 26─38 
  AMPED and, 43─44
  beta testers
  Bezos's sketch for, 
  bug in,
  as Doppler project, 26─38, 40, 42─47
  Evi and, 34─36
  Fire tablet and, 44
  language─specific version of, 60
  launch of, 44─46
  name of, 32
  Skills Kit, 44─46
  social cue recognition in, 34─35
  speech recognition in, 
  voice of, 27─30
  voice service, 47
  see also Amazon Echo  
  far─field speech recognition, 27─28
  
p.24
Greg Hart
([ in 2010, Greg Hart pointed out to Jeff Bezos that speech recognition technology was finally getting good at dictation and search; he did this by showing Jeff Google's voice search on an Android phone; ])
speech recognition 2010
Google's voice search, Android phone
technology was finally getting good at dictation and search

p.24
   Hart remembered talking to Bezos about speech recognition one day in late 2010 at Seattle's Blue Moon Burgers.  Over lunch, Hart demonstrated his enthusiasm for Google's voice search on his Android phone by saying, “pizza near me”, and then showing Bezos the list of links to nearby pizza joints that popped up on-screen.  “Jeff was a little skeptical about the use of it on phones, because he thought it might be socially awkward”, Hart remembered.  But they discussed how the technology was finally getting good at dictation and search. 

p.24
January 4, 2011
Greg Hart, 
Ian Freed, device vice president,
Steve Kessel
Amazon's HQ, Day 1 North building
 
p.25
voice-activated cloud computer
speaker, microphone, a mute button
Fiona, the Kindle building

p.26
   One early recruit, Al Lindsay, 
Al Lindsay, who in a previous job had written some of the original code for telco US West's voice-activated directory assistance.  Lindsay spent his first three weeks on the project on vacation at his cottage in Canada, writing a six-page narrative that envisioned how outside developers might program their own voice-enabled apps that could run on the device.

p.26
internal recruit, 
John Thimsen, director of engineering

p.26
  To speed up development
Hart and his crew started looking for startups to acquire.

p.27
Yap, a twenty-person startup based in Charlotte, North Carolina, automatically translated human speech such as voicemails into text, without relying on a secret workforce of human transcribers

p.27
though much of Yap's technology would be discarded, its engineers would help develop the technology to convert what customers said into a computer-readable format.

p.27
industry conference in Florence, Italy
Amazon's newfound interest in speech technology

p.27
Jeff Adams, Yap's VP of research
two-decade veteran of the speech industry

pp.27-28
  after the meeting, Adams delicately told Hart and Lindsay that their goals were unrealistic.  Most experts believed that true “far-field speech recognition” ── comprehending speech from up to 32 feet away, often amid crosstalk and background noise ── was beyond the realm of established computer science, since sound bounces off surfaces like walls and ceilings, producing echoes that confuse computers.
“They basically told me, ‘We don't care. Hire more people. Take as long as it takes. Solve the problem,’” recalled Adams. “They were unflappable.”
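One standard ingredient for far-field arrays is delay-and-sum beamforming: delay each microphone's signal so wavefronts from a chosen direction line up, then average, reinforcing speech from that direction while echoes arriving from other directions partially cancel. A minimal sketch under a plane-wave assumption; the integer-sample delays are a simplification, and nothing here is Amazon's actual pipeline.

```python
import numpy as np

# Minimal delay-and-sum beamformer sketch (plane-wave assumption).
# signals: (n_mics, n_samples) array; mics: (n_mics, 2) positions in meters.
# Integer-sample delays are a simplification; real systems interpolate.
def delay_and_sum(signals, mics, bearing_deg, fs, c=343.0):
    d = np.array([np.cos(np.radians(bearing_deg)),
                  np.sin(np.radians(bearing_deg))])
    lead = mics @ d / c                        # seconds each mic hears the wave early
    out = np.zeros(signals.shape[1])
    for sig, tau in zip(signals, lead):
        out += np.roll(sig, int(round(tau * fs)))  # re-align the wavefronts
    return out / len(signals)                  # aligned speech adds; echoes blur out
```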

p.28
Polish startup Ivona generated computer-synthesized speech that resembled a human voice.
  Ivona was founded in 2001 by Lukasz Osowski, a computer science student at the Gdansk University of Technology.  Osowski had the notion that so-called “text-to-speech”, or TTS, could read digital texts aloud in a natural voice and help the visually impaired in Poland appreciate the written word. 
Michael Kaszczuk
he took recordings of an actor's voice and selected fragments of words, called diphones, and then blended or “concatenated” them together in different combinations to approximate natural-sounding words and sentences that the actor might never have uttered. 
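A toy sketch of the concatenation step described above: pre-recorded fragments are spliced with a short linear crossfade so the seams don't click. Real unit-selection TTS also searches a large database for the best-matching diphones and smooths pitch and energy; none of that is shown, and the parameters are invented.

```python
import numpy as np

# Toy concatenative synthesis: splice pre-recorded fragments (e.g., diphones)
# with a short linear crossfade.  Unit selection and prosody smoothing are
# omitted; the sample rate and fade length are invented.
def concatenate(fragments, fs=16000, fade_ms=10):
    n = int(fs * fade_ms / 1000)                 # crossfade length in samples
    ramp = np.linspace(0.0, 1.0, n)
    out = np.asarray(fragments[0], dtype=float)
    for frag in fragments[1:]:
        frag = np.asarray(frag, dtype=float)
        out[-n:] = out[-n:] * (1 - ramp) + frag[:n] * ramp  # blend the seam
        out = np.concatenate([out, frag[n:]])
    return out
```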

p.28
While students, they paid a popular Polish actor named Jacek Labijak to record hours of speech to create a database of sounds.  The result was their first product, Spiker, which quickly became the top-selling computer voice in Poland. 
Over the next few years, it was used widely in subways, elevators, and for robocall campaigns. 

p.29
annual Blizzard Challenge, a competition for the most natural computer voice, organized by Carnegie Mellon University. 

p.29
the Gdansk R&D center was put in charge of crafting Doppler's voice.

p.29
the team considered lists of characteristics they wanted in a single personality, such as trustworthiness, empathy, and warmth, and determined those traits were more commonly associated with a female voice. 

pp.29-30
Atlanta area-based voice-over studio, GM Voices, the same outfit that had helped turn recordings from a voice actress named Susan Bennett into Apple's agent, Siri. 
p.30
To create synthetic personalities, GM Voices gave female voice actors hundreds of hours of text to read, from entire books to random articles, a mind-numbing process that could stretch on for months. 

p.30
voice artist behind Alexa
professional voice-over community:  Boulder-based singer and voice actress Nina Rolle. 
warm timbre of Alexa's voice
Nina Rolle (Boulder-based singer and voice actress)
 
p.32
Bezos also suggested “Alexa”, an homage to the ancient library of Alexandria, regarded as the capital of knowledge. 

p.32
[ seven omnidirectional microphones ] at the top
a cylinder elongated to create separation between the array of seven omnidirectional microphones at the top and the speakers at the bottom, with some 14 hundred holes punctured in the metal tubing to push out air and sound. 

p.34
   In 2012, inspired by Siri's debut, Tunstall-Pedoe pivoted and introduced the Evi app for the Apple and Android app stores.  Users could ask it questions by typing or speaking.  Instead of searching the web for answers like Siri, or returning a set of links, like Google's voice search, Evi evaluated the question and tried to offer an immediate answer.  The app was downloaded over 250,000 times in its first week and almost crashed the company's servers.  

p.34
   Evi employed a programming technique called knowledge graphs, or large databases of ontologies, which connect concepts and categories in related domains.  If, for example, a user asked Evi, “What is the population of Cleveland?”  the software interpreted that question and knew to turn to an accompanying source of demographic data.  Wired described the technique as a “giant treelike structure” of logical connections to useful facts. 
   Putting Evi's knowledge base inside Alexa helped with the kind of informal but culturally common chitchat called phatic speech.  
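A toy illustration of the knowledge-graph idea: facts stored as subject-relation triples, with a parsed question routed to the matching relation. The entries, including the population figure, are illustrative placeholders, not Evi's data or code.

```python
# Toy knowledge-graph lookup in the spirit described above.  Entries are
# illustrative placeholders, not Evi's actual data or ontology.
triples = {
    ("Cleveland", "population"): "about 372,000 (illustrative figure)",
    ("Cleveland", "located_in"): "Ohio",
}

def answer(entity: str, relation: str) -> str:
    return triples.get((entity, relation), "I don't know.")

# "What is the population of Cleveland?" parses to (entity, relation):
print(answer("Cleveland", "population"))
```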

p.35
   Integrating Evi's technology helped Alexa respond to factual queries, such as requests to name the planets in the solar system, and it gave the impression that  Alexa was smart.  But was it?  Proponents of another method of natural language understanding, called deep learning, believed that Evi's knowledge graphs wouldn't give Alexa the kind of authentic intelligence that would satisfy Bezos's dream of a versatile assistant that could talk to users and answer any question. 

p.35
  In the deep learning method, machines were fed large amounts of data about how people converse and what responses proved satisfying, and then were programmed to train themselves to predict the best answers. 
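A heavily simplified stand-in for that recipe: learn a scorer for candidate responses from logged satisfaction labels, then rank. A one-layer logistic model is used here in place of the deep networks the passage describes, and all data is invented.

```python
import numpy as np

# Heavily simplified stand-in: fit a scorer on logged (features, satisfied?)
# pairs via logistic regression.  A one-layer model replaces the deep
# networks described in the text; features and labels are invented.
X = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]])  # toy features
y = np.array([1.0, 0.0, 1.0, 0.0])                              # user satisfied?

w = np.zeros(2)
for _ in range(500):                     # gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p)

print(1.0 / (1.0 + np.exp(-X @ w)))     # learned satisfaction score per candidate
```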

p.35
The chief proponent of this approach was an Indian-born engineer named Rohit Prasad.  “He was a critical hire”, said engineering director John Thimsen.  “Much of the success of the project is due to the team he assembled and the research they did on far-field speech recognition.”

p.35
BBN Technologies (later acquired by Raytheon)
Cambridge, Massachusetts-based defense contractor 
At BBN, he [Rohit Prasad] worked on one of the first in-car speech recognition systems and automated directory assistance services for telephone companies. 

p.37
For years, Google also collected speech data from a toll-free directory assistance line, 800-GOOG-411.

p.37
Hart, Prasad, and their team created graphs that projected how Alexa would improve as data collection progressed.  The math suggested they would need to roughly double the scale of their data collection efforts to achieve each successive 3 percent increase in Alexa's accuracy. 

 • The math suggested they would need to roughly double the scale of their data collection efforts to achieve each successive 3 percent increase in Alexa's accuracy., p.37, Brad Stone, Amazon unbound: Jeff Bezos and the invention of a global empire, 2021.  

p.37
“How will we even know when this product is good?”
early 2013
Hart, Prasad, and their team created graphs that projected how Alexa would improve as data collection progressed.  The math suggested they would need to roughly double the scale of their data collection efforts to achieve each successive 3 percent [3%] increase in Alexa's accuracy. 
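Taken at face value, the rule makes accuracy gains logarithmic in data: each +3 percent costs one more doubling, so +30 percent needs about 2^10 ≈ 1,000 times the data, which is the shape of Bezos's “forty years versus twenty” remark below. A small arithmetic sketch; the rule is the book's, the function is only a restatement.

```python
# Arithmetic sketch of the reported rule: each +3% of accuracy costs one
# doubling of the data-collection effort, so gains are logarithmic in data.
def data_multiplier(accuracy_gain_pct: float, pct_per_doubling: float = 3.0) -> float:
    return 2.0 ** (accuracy_gain_pct / pct_per_doubling)

for gain in (3, 9, 15, 30):
    print(f"+{gain}% accuracy needs {data_multiplier(gain):,.0f}x the data")
# +3% -> 2x, +9% -> 8x, +15% -> 32x, +30% -> 1,024x
```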

p.38
“First tell me what would be a magical product, then tell me how to get there.”

p.38
Bezos's technical advisor at the time, Dilip Kumar, 

p.38
they would need thousands more hours of complex, far-field voice commands.

p.38
Bezos apparently factored in the request to increase the number of speech scientists and did the calculation in his head in a few seconds. 
“Let me get this straight. You are telling me that for your big request to make this product successful, instead of it taking forty years, it will only take us twenty?”

p.42
the resulting program, conceived by Rohit Prasad and speech scientist Janet Slifka over a few days in the spring of 2013
p.42
Rohit Prasad and speech scientist Janet Slifka 
spring of 2013

p.42
answer a question that later vexed speech experts ── 
how did Amazon come out of nowhere to leapfrog Google and Apple in the race to build a speech-enabled virtual assistant?

pp.42-43
internally the program was called AMPED
Amazon contracted with an Australian data collection firm, Appen, and went on the road with Alexa, in disguise. 
p.43
Appen rented homes and apartments, initially in Boston, and then Amazon littered several rooms with all kinds of “decoy” devices:  pedestal microphones, Xbox gaming consoles, televisions, and tablets.  There were also some twenty Alexa devices planted around the rooms at different heights, each shrouded in an acoustic fabric that hid them from view but allowed sound to pass through. 
p.43
Appen then contracted with a temp agency, and a stream of contract workers filtered through the properties, eight hours a day, six days a week, reading scripts from an iPad with canned lines and open-ended requests. 
p.43
  The speakers were turned off, so that Alexa didn't make a peep, but the seven microphones on each device captured everything and streamed the audio to Amazon's servers.  Then another army of workers manually reviewed the recordings and annotated the transcripts, classifying queries that might stump a machine, 
p.43
so that next time, Alexa would know.
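A hypothetical shape for one annotated record out of such a pipeline; the field names and values are invented for illustration, not Amazon's schema.

```python
# Hypothetical annotation record; fields are invented, not Amazon's schema.
record = {
    "audio_id": "boston-house-03/day07/utt0412",
    "transcript": "play some jazz in the kitchen",
    "intent": "PlayMusic",                    # label added by a human reviewer
    "slots": {"genre": "jazz", "location": "kitchen"},
    "stumped_machine": False,                 # flagged when the model failed
}
print(record["intent"])
```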
p.43
  The Boston test showed promise, so Amazon expanded the program, renting more homes and apartments in Seattle and ten other cities over the next six months to capture the voices and speech patterns of thousands more paid volunteers.  It was a mushroom-cloud explosion of data about device placement, acoustic environments, background noise, regional accents, and all the gloriously random ways a human being might phrase a simple request to hear the weather, for example, or play a Justin 

p.44
by 2012
multimillion-dollar cost.

p.44
By 2014, it had increased its store of speech data by a factor of ten thousand and largely closed the gap with rivals like Apple and Google.

p.47
over the next few months, Amazon would roll out the Alexa Skills Kit, which allowed other companies to build voice-enabled apps for the Echo, and Alexa Voice Service, which let the makers of products like lightbulbs and alarm clocks integrate Alexa into their own devices. 

p.47
a smaller, cheaper version of Echo, the hockey puck-sized Echo Dot, 
a portable version with batteries, the Amazon Tap. 
Echo
Echo dot
Amazon Tap (a portable, battery-powered version of Echo) 

p.24
January 4, 2011

p.45
November 6, 2014

Brad Stone, Amazon unbound: Jeff Bezos and the invention of a global empire, 2021
   ____________________________________
··<────────────────────────────────────────────────────────────────────────────>

earthquake research

   Two forces or dual functions (coping with the reality of living)
   • to ‘understand earthquake’ (un-e)
      ■  ... 
   • to ‘manipulate earthquake’ (ma-e) 
      ■ “They also wanted to remind the Bureau that there was evidence that reservoirs actually cause earthquakes.
        “For example, in 1935 the Colorado river was dammed, creating the large reservoir called Lake Mead.  In the next ten years, 6,000 minor earthquakes occurred in what was previously an earthquake-free area.  The underlying rocks──had 10 cubic miles of water set on top of them.”, p.234, Charles Perrow, Normal accidents : living with high-risk technologies, 1999
      ■ “Denver, Colorado, had a mild earthquake in April 1963.  It was a surprise, since there had not been an earthquake in the area in eighty-one years.  Small ones continued for several years; one in 1967 did a little damage to the city.  It turned out that the army caused them. 
        “The army's Rocky Mountain Arsenal is 10 miles from Denver.  It manufactures toxic materials, such as nerve gas, and had to get rid of large amounts of contaminated water.  For a time they just put it into holding ponds, but this led to the death of crops, livestock, and wildlife.  So they dug a well, 2 miles deep, and forced the garbage into it under high pressure.  Six weeks later there was the first earthquake, and then an almost daily series of minor tremors.  The source of the earthquakes was suspected within a year, but the army denied it could happen and went on pumping.  The water, under high pressure, forced the old cracks in very old rocks to grow, and this allowed the rocks, under pressure from tectonic movements, to slide in jerky movements over one another.  Even after the pumping stopped, for a time the pressurized water continued to force open the cracks.  About two years after the army finally stopped the practice, the earthquakes also stopped. 
        “National Center for Earthquake Research 
        “The National Center for Earthquake Research took over part of the field for deliberate experimentation. When they pumped water in, earthquakes occurred; when they pumped the water back out, they stopped.”, p.243, Charles Perrow, Normal accidents : living with high-risk technologies, 1999
   ____________________________________
Sharon Weinberger, The imagineers of war : the untold story of DARPA, the pentagon agency that changed the world, 2017

pp.99-104
p.99
ARPA was assigned nuclear test detection under the code name Vela at the end of 1959 as a counterweight to the CIA's and the air force's secret test detection network.  ARPA got the work, quite simply, because President Eisenhower did not trust his spooks and wanted an assessment that was independent of the CIA and its assets.  
p.99
brought renewed focus and funding to the Vela test detection program. 
By 1961, Vela had three parts:  
Vela Uniform, to detect underground nuclear tests;
Vela Sierra, to detect nuclear explosions in the atmosphere; and
Vela Hotel, which would launch satellites with sensors to detect nuclear tests from space. 

99  Vela had three parts:  The two most significant parts of Vela ended up being Vela Hotel and Vela Uniform.  Vela Sierra, which involved ground-based sensors to detect nuclear tests in space, was eventually folded into Vela Hotel.  Some of the Vela work, it turns out, did not really require any exotic science.  For example, detecting underwater explosions required little new research.  ARPA conducted some underwater tests using conventional explosives under the code name CHASE, short for “cut holes and sink 'em”.  Huff and Sharp, Advanced Research Projects Agency, VII-15.  “The ocean detection system was a nonproblem”, Frosch said.  Frosch, interview with author.  [p.390]

p.99
The academic discipline of seismology, at the time, was a backwater.  Robert Frosch, who was recruited to ARPA to run Vela, recalled going with the director, Robert Sproull, to visit what was supposed to be a state-of-the-art seismic vault, one of the underground bunker-like structures that were used to measure tremors.  The two men came out of the vault in shock, feeling as if they had just emerged from a time capsule.  The seismologists there were using pen recorders and primitive galvanometers, an analog instrument used to measure electrical current.  

p.99
Vela began to change that with an influx of funding for seismology that was almost unimaginable in scale for most areas of science.  The military's need to distinguish earthquakes from nuclear tests brought seismology “kicking and screaming” into the 20th century, according to Frosch.  At one point, he said, he funded almost “every seismologist in the world, except for two Jesuits at Fordham University” who refused to take money from the Pentagon.  

p.100
Large Aperture Seismic Array, or LASA, 
a massive nuclear detection system that comprised 200 “seismic vaults” buried across a 200-kilometer-diameter area in the eastern half of Montana.  For it to work, more than a dozen of these enormous sites would have to be constructed around the world to monitor the Soviet Union.  
There had been smaller arrays, including one in the United Kingdom, 
The air force hated the idea, 

p.100
Billings, Montana
  What was amazing about LASA, according to Frosch, was the scale of the work, which was completed in just 18 months, a schedule unimaginable for government projects that typically take years, if not decades. 
When ARPA needed to have a center where all the seismic data could be collected and analyzed, the agency ended up renting space in downtown Billings, where data from the array was routed to an IBM computer.  

p.100
  ARPA also began funding the placement of seismograph stations around the world that were operated by scientists. 

pp.100-101
the CIA and the air force, who up to that point had a monopoly on advice to political leaders about what was theoretically possible to monitor a [nuclear explosion] test ban. 

p.101
local scientists only needed to agree to operate them and share the data. 

p.101
a growing tension between secret and open research

p.102
air force and the CIA refused to release data from their network of sensors. 

bête noire - Fr. Anything that is an object of hate or dread; a bugaboo. [< F, black beast]

p.102
The bête noire of the nuclear detection world was Carl Romney, a scientist who worked for the Air Force Technical Applications Center, or AFTAC, the agency responsible for nuclear test detection. 

p.102
  Whether deliberate or not, the problem with secret data, as Ruina pointed out, was that “nobody could argue with it; they could just question it.”  The secret data problem came to a head in 1962, when the United States carried out a test called Aardvark, a part of the first series of tests conducted completely underground.  

p.102
Aardvark, a 40-kiloton nuclear device intended for nuclear artillery, produced reliable seismographic data on a nuclear underground explosion, and Romney suddenly realized he had been wrong about a critical national security issue. 

p.102
He had been arguing that it would be difficult to distinguish small underground nuclear tests from earthquakes, which would make verifying a nuclear test ban treaty difficult, if not impossible. 
Now, with the Aardvark data, he knew he had been wrong on a key point.  
During a July 3, 1962, meeting, Romney announced that the new seismic data led him to conclude that distinguishing between tremors and small nuclear tests might not be as difficult as he had previously thought. 

102  Now, with the Aardvark data:  Romney insisted the revisions were the result not of systemic errors but of getting more data.  He had been relying on historical data of large Soviet nuclear tests and extrapolating down to make estimates about the detection of smaller tests, which might be confused with earthquakes.  “The change came about as a result of additional information we got”, Romney insisted.  Romney, interview with the author.  [p.390]

p.102
it would look as if the government were “withholding information that would tend to ease the inspection problem in a nuclear test ban.”

pp.102-103
  Ruina called it an “honest mistake”, but one that would have been avoided if other scientists had been given access to the classified data that Romney jealously guarded.  “This is what can happen when you have one person interpreting data, there's no peer group reviewing it, and there's nobody duplicating the experiment”, the ARPA director wrote in a three-page letter, blaming the mistake on secrecy. 

p.103
Glenn Seaborg, chairman of the Atomic Energy Commission 
played a key role in test ban negotiations. 
“VELA seemed to indicate that the detection capability was better than had been thought by American experts in the period from 1959 to 1961”, Seaborg wrote in his memoir detailing the negotiations.  


p.103
plate tectonics 

104  Following Kennedy's death:  As John Dumbrell points out in his book, President Lyndon Johnson and Soviet Communism (Manchester, U.K.: Manchester University Press, 2004), President Johnson approved the largest ever underground nuclear test ── Operation Boxcar, a 1.3-megaton explosion ── in the midst of negotiations over the Nuclear Nonproliferation Treaty.  [p.391]

  (The imagineers of war : the untold story of DARPA, the Pentagon agency that changed the world / by Sharon Weinberger., New York : Alfred A. Knopf, 2017, united states. defense advanced research projects agency──history. | military research──united states. | military art and science──technological innovations──united states. | science and state──united states. | national security──united states──history. | united states──defenses──history., U394.A75 W45 2016 (print) | U394.A75 (ebook) | 355/.040973, 2017, )
   ____________________________________

Annie Jacobsen, The pentagon's brain : an uncensored history of DARPA, America's top secret military research agency, 2015 

p.58
If the president was able to ban nuclear weapons tests, the Livermore laboratory would most likely cease to exist. 

p.58
how to put an end to nuclear weapons tests once and for all. 
The centerpiece was test detection. 
ARPA would be in charge of overseeing this new technology, which included seismic and atmospheric sensing, designed to make sure no one cheated on the test ban.  The program was called Vela.  Its technology was highly classified and included three subprograms:  Vela Hotel, Vela Uniform, and Vela Sierra. 

p.59
Vela Hotel 
  developed a high-altitude satellite system to detect nuclear explosions from space. 

Vela Uniform 
  developed ground sensors able to detect nuclear explosions underground, and produced a program to monitor and read “seismic noise” across the globe. 

Vela Sierra 
  monitored potential nuclear explosions in space.  

p.72
Harold Brown
  Here in Geneva, Brown acted as Lawrence's technical advisor.  In order to stop testing, both superpowers had to agree to the creation of a network of 170 seismic detection facilities across Europe, Asia, and North America.  This technology effort was being spearheaded by ARPA through its Vela Uniform program.  Technology had advanced to the point where these detection facilities would soon be able to monitor and sense, with close to 100 percent certainty, any aboveground nuclear test over 1 kiloton and, with 90 percent certainty, any underground test over 5 kilotons.  Both sides knew that in some situations it was difficult for detection facilities to tell the difference between an earthquake and an underground test.  These were the kinds of verification details that the experts were working to hash out. 

p.59
ground sensors able to detect nuclear explosions underground, and produced a program to monitor and read “seismic noise” across the globe. (Vela Uniform)

p.240
  DARPA's early work, going back to 1958, had fostered at least six sensor technologies.  
Seismic sensors, developed for the Vela program, sense and record how the earth transmits seismic waves. 
In Vietnam, the seismic sensors could detect heavy truck and troop movement on the Ho Chi Minh Trail, but not bicycles or feet.  
For lighter loads, strain sensors were now being further developed to monitor stress on soil, notably that which results from a person on the move. 
Magnetic sensors detect residual magnetism from objects carried or worn by a person; infrared sensors detect intrusion by beam interruption.  
Electromagnetic sensors generate a radio frequency that also detects intrusion when interrupted.  Acoustic sensors listen for noise.  These were all programs that were now set to take off anew. 

Annie Jacobsen, The pentagon's brain : an uncensored history of DARPA, America's top secret military research agency, 2015 
··<────────────────────────────────────────────────────────────────────────────>

Saturday, October 8, 2022

Experimentation matters


Max More
5.0 out of 5 stars This book matters!
Reviewed in the United States on November 7, 2003
The way to succeed is to double your failure rate. That comment by Thomas Watson, Sr. is not among the innovators' words of wisdom in Stefan Thomke's densely informative exploration of technologies and processes of experimentation, but it perfectly fits the message. Central to Thomke's message in this book is the idea that iterated experimentation through the use of models, prototypes, and computer simulations is the key to learning and innovation. Getting the key to fit in the lock of increased organizational innovation capability, however, takes some jiggling and struggling. Experimentation Matters details the technologies that can transform innovation but places just as much emphasis on the changes that must be made to business processes, organization, culture, incentives, and management. Thomke provides plenty of detailed illustrations of companies wrestling with these issues, and offers six principles to help companies experiment early and often and to organize for rapid iteration.
The first part of the book explains in depth the reasons why experimentation matters for learning and innovation, and how new technologies are affecting the development of both products and services. Thomke shows how the rate of learning is influenced by several factors that affect the process and how it is managed: fidelity, cost, iteration time, capacity, sequential and parallel strategies, signal-to-noise ratio, and type of experiment. Beneath the bewildering diversity of approaches to innovation in different industries, Thomke uncovers six principles that can improve how experimentation occurs: Anticipate and exploit early information through front-loaded innovation processes; Experiment frequently but do not overload your organization; Integrate new and traditional technologies to unlock performance; Organize for rapid experimentation; Fail early and often but avoid "mistakes"; and Manage projects as experiments.

six principles to help companies experiment early and often and to organize for rapid iteration that can improve how experimentation occurs:
  (1) Anticipate and exploit early information through front-loaded innovation processes; 
  (2) Experiment frequently but do not overload your organization; 
  (3) Integrate new and traditional technologies to unlock performance; 
  (4) Organize for rapid experimentation; 
  (5) Fail early and often but avoid "mistakes"; and 
  (6) Manage projects as experiments.

In the final chapter, Thomke looks at how some companies are "shifting the locus of experimentation" to customers as a way to create new value. This approach, sometimes referred to as "co-creation", not only raises productivity but helps fundamentally change the sorts of products and services that can be created. 
Innovation toolkits given to customers need to enable them to iterate through the steps of experimentation, be user-friendly, contain libraries of useful, pretested and debugged components and modules, and they must contain information about the capabilities and limitations of the production process. In addition to the development of a customer toolkit, Thomke adds four other steps for shifting experimentation and innovation to customers and, very importantly, notes how the creation and capture of value also shifts.

One great strength of Thomke's book is the attention given to the managerial and organizational challenges of implementing new technologies such as computer modeling and simulation and combinatorial and high-throughput testing. 

As other writers have repeatedly emphasized - but many managers have not yet understood - new technologies *must* be introduced only in concert with revised business processes, structures, and management approaches. 

Iterated experimentation helps learning by increasing the number of failures. But if incentives continue to punish failures, the new technologies will be underused or misused. Financial incentives, organizational culture, and management communications will have to change if experimenters are to feel free to fail at the most productive rate. 

  feel free to fail at the most productive rate. 
  The way to succeed is to double your failure rate. 
  (5) Fail early and often but avoid "mistakes"; and 

Thomke illustrates and details the crucial role of organization, process, and management in realizing the potential of experimentation technologies with a range of illuminating cases. He devotes a chapter to these effects in the integrated circuit industry, examines the challenges faced by Bank of America in its bold service experimentation efforts, and shows how managers at Eli Lilly struggled with non-technological aspects of high-powered experimentation in the drug discovery process. 

A study of experimentation in the auto industry, particularly at BMW, suggests several lessons regarding the reality of technology introduction: Technologies are limited by the processes and people that use them; organizational interfaces can get in the way of experimentation; and technologies change faster than behavior. Thomke also shows how managers can look at projects as experiments, reiterating, refining, and learning from them as they proceed through the stages of design, build, run, and analyze.


B.Sudhakar Shenoy
5.0 out of 5 stars Innovation redefined
Reviewed in the United States on November 27, 2003
Observation, exploration and experimentation have been the three basic means of learning for scientists. Of these, experimentation calls for the highest levels of external intervention and as a topic by itself has always been of interest to statisticians, who have developed powerful techniques to derive maximum information through the least possible number of experiments. Application of these statistical techniques has resulted in substantial reduction in research expenditure, quicker understanding of scientific principles and shorter time to convert ideas into useful products. On the other hand, new technologies like simulation and CAD/CAE that harness the advances in computing have completely changed the experimental landscape by providing powerful techniques for rapid and economical experimentation on our desktops and servers. To cite one example discussed in this book, car maker BMW's crash simulation test progressed from 3,000 to 700,000 finite elements between 1982 and 2002 while simultaneously reducing processing time from 3 months to 30 hours. The power of computing enables "front-loaded" innovation: understanding the phenomenon before committing resources to physical manufacturing.

But the lacuna [a space where something has been omitted or has come out; gap; hiatus; esp., a missing portion in a manuscript, text, etc.] is that experimentation has never been thought of as a separate management discipline cutting across functional silos to bring innovative solutions into the marketplace. Experimentation as a strategic tool that needs management attention and involvement is the core theme of this book.

Management deals with producing results under uncertainty. Uncertainty can be broadly classified under technical, production, market and customer needs. Experimentation should tell us not only what will work, but also what does NOT work. The knowledge so derived should seamlessly flow across the Design-Build-Run-Analyze cycle that cuts across departmental boundaries in large organizations. This is analogous to the concept of ERP in business processes. Though this concept looks simple, organizational barriers prevent the seamless sharing of information for innovation. Design, manufacturing, marketing and procurement functions fail to optimize on the organizational repository of knowledge that can put winning products into the marketplace. This book is an excellent study on how management can use experimentation as a unique strategy within and beyond organizational boundaries. Case studies are quite detailed and well illustrated.
Read this book. It is worth experimenting.

Chih-Tang Sah

  Chih-Tang Sah https://en.wikipedia.org/wiki/Chih-Tang_Sah Evolution of the MOS transistor –– from conception of VLSI by Chih-tang Sah, fel...