The Microphone

The History of Computing - A podcast by Charles Edge


Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate for the future! Today's episode is on the microphone. Now you might say "wait, that's not a computer thing." But given that every computer made in the past decade has one, including your phone, I would beg to differ. Also, every time I record one of these episodes, I seem to get a little better at wielding the instruments, which has led me to spend way more time than is probably appropriate learning about them. So what exactly is a microphone? Well, it's a simple device that converts mechanical waves of energy into electrical waves of energy. Microphones have a diaphragm, much as we humans do, and that diaphragm mirrors the sound waves it picks up. So where did these microphones come from? Well, Robert Hooke got the credit for hooking a string to a cup in 1665, and suddenly humans could push sound over distances. Then in 1827 Charles Wheatstone, who invented the telegraph, put the word microphone into our vernacular. 1861 rolls around and Johann Philipp Reis built the Reis telephone, which electrified the microphone using a metallic strip that was attached to a vibrating membrane. When a little current was passed through it, it reproduced sound far away. Think of this as more of using electricity to amplify the effects of the string on the cup. But critically, sound had been turned into signal. In 1876, Emile Berliner built a modern microphone while working on the gramophone. He was working with Thomas Edison at the time and would go on to sell the patent for the microphone to The Bell Telephone Company. Now, Alexander Graham Bell had designed a telephone transmitter in 1876 but ended up in a patent dispute with David Edward Hughes. And as he did with many a great idea, Thomas Edison made the first practical microphone in 1886. This was a carbon microphone that would go on to be used for almost a hundred years.
It could produce sound but it kinda' sucked for music. It was used in the first radio broadcast in New York in 1910. The name comes from the granules of carbon that are packed between two metal plates. Edison would end up introducing the diaphragm, and the carbon button microphone would become the standard. That microphone, though, often still had a built-in amp, strengthening the voltage that was the signal sound had been converted to. 1915 rolls around and we get the vacuum tube amplifier. And in 1916, E.C. Wente of Bell Laboratories designed the condenser microphone. This still used two plates, but each had an electrical charge, and when the sound vibrations moved the plates, the signal was electronically amplified. Georg Neumann then had the idea to use gold-plated PVC and design the mic such that as sound reached the back of the microphone it would be cancelled, resulting in a cardioid pattern, making it the first cardioid microphone and an ancestor to the microphone I'm using right now. In the meantime, other advancements were coming. Electromagnets made it possible to add moving coils and ribbons, and Wente and A.C. Thuras would then invent the dynamic, or moving-coil, microphone in 1931. This had much more of an omnidirectional pattern, and it wasn't until 1959 that the Unidyne III became the first mic to pull in sound from the top of the mic, which would change the shape and look of the microphone forever. Then in 1964 Bell Labs brought us the electrostatic transducer mic and the microphone exploded, with over a billion of these built every year. Then Sennheiser gave us clip-on microphones in the 80s, calling their system the Mikroport and releasing it through Telefunken. No, Bootsy Collins was not a member of Telefunken. He'd been touring with James Brown for a while and by then was with Parliament-Funkadelic. Funk made a lot of use of all these innovations in sound though. So I see why you might be confused.
Other than the fact that all of this was leading us up to a point of being able to use microphones in computers, where's the connection? Well, remember Bell Labs? In 1962 they invented the electret microphone. Here the electrically biased diaphragm forms a capacitor whose charge changes with the vibrations of sound waves. Robert Noyce had given us the integrated circuit in 1959, and microphones couldn't escape the upcoming Moore's law, as every electronics industry started looking for applications. Honeywell came along with silicon pressure sensors, and by '65 Harvey Nathanson gave us the resonant-gate transistor. That would be put on a monolithic chip by '66, and through the 70s microsensors were developed to isolate every imaginable environmental parameter, including sound. At this point, computers were still big hulking things. But computers and sound had been working their way into the world for a couple of decades. The technologies would evolve into one another at some point, obviously. In 1951, Geoff Hill pushed pulses to a speaker using the Australian CSIRAC, and Max Mathews at Bell Labs had been doing sound generation on an IBM 704 using the MUSIC program, which went a step further and actually created digital audio using PCM, or Pulse-Code Modulation. The concept of sending multiplexed signals over a wire had started with the telegraph back in the 1870s, and the facsimile, or fax machine, used it as far back as 1920. But the science and the math hadn't yet been worked out well enough for a computer to handle the rules required. It was Bernard Oliver and Claude Shannon who really put PCM on the map. We've mentioned Claude Shannon on the podcast before. He met Alan Turing in '43 and went on to write crazy papers like A Mathematical Theory of Cryptography, Communication Theory of Secrecy Systems, and A Mathematical Theory of Communication. And he helped birth the field of information theory. When the math nerds showed up, microphones got way cooler.
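The core PCM idea is simple enough to sketch in a few lines. Here's a minimal, hypothetical Python illustration (not any historical program): generate a 440 Hz tone, sample it at regular intervals, and quantize each sample into the signed 16-bit range used by PCM wave files.

```python
import math

SAMPLE_RATE = 44100   # samples per second (the CD-quality rate)
FREQ = 440.0          # tone frequency in Hz
DURATION = 0.01       # seconds of audio to generate

# Sample the "analog" signal at regular intervals.
samples = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
           for n in range(int(SAMPLE_RATE * DURATION))]

# Quantize each sample to a signed 16-bit integer: that's 16-bit PCM.
pcm = [int(s * 32767) for s in samples]
```

Playing it back is the reverse trip: a digital-to-analog converter turns those integers back into a voltage, which an amplifier pushes to a speaker.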
By the way, Shannon liked to juggle on a unicycle. I would too if I could. They documented that you could convert audio to digital by sampling it at regular intervals and mapping each sample's amplitude to a number. This analog-to-digital converter could then be printed on a chip that would output encoded digital data that would live on storage. Demodulate that with a digital-to-analog converter, apply amplification, and you have the paradigm for computer sound. There's way more, like anti-aliasing and reconstruction filters, but someone will always think you're over-simplifying. So the evolutions came, giving us multi-track stereo cassettes and fax machines, and eventually getting to the point that this recording will get exported into a 16-bit PCM wave file. PCM would end up evolving to LPCM, or linear pulse-code modulation, and be used in CDs, DVDs, and Blu-rays. Oh, and lossily compressed into mp3, mpeg4, etc. By the 50s, MIT hackers would start producing sound and even use the computer to emit the same tone Captain Crunch discovered, so they could make free phone calls. They used a lot of paper tape then, but with magnetic tape and then hard drives, computers would become more and more active in audio. By '61 John Kelly Jr. and Carol Lochbaum made an IBM 7094 mainframe sing Daisy Bell. Arthur C. Clarke happened to see it, and that made it into 2001: A Space Odyssey. Remember hearing it sing that as it was getting taken apart? But the digital era of sound recording is marked as starting with the explosion of Sony in the 1970s. Per Moore's Law, chips got smaller, faster, and cheaper, and by the 2000s microelectromechanical (MEMS) microphones went mainstream, which are what are built into laptops, cell phones, and headsets. You see, by then it was all on a single chip. Or even shared a chip. These are still mostly omnidirectional. But in modern headphones, like Apple AirPods, you're using dual beamforming microphones.
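At its simplest, those dual microphones use what's called delay-and-sum beamforming. Here's a minimal, hypothetical Python sketch (real devices use far more sophisticated adaptive filtering): delay one mic's signal so that sound from a chosen direction lines up with the other mic's, then average the two.

```python
import math

SAMPLE_RATE = 16000      # Hz, a common rate for voice capture
SPEED_OF_SOUND = 343.0   # meters per second in air
MIC_SPACING = 0.1        # meters between mics (exaggerated for illustration)
STEER_ANGLE = math.radians(30)  # direction we want to "listen" toward

# A plane wave from STEER_ANGLE reaches one mic this many samples earlier.
delay = round(MIC_SPACING * math.sin(STEER_ANGLE) / SPEED_OF_SOUND * SAMPLE_RATE)

def delay_and_sum(mic1, mic2, delay):
    """Shift mic2 later by `delay` samples, then average with mic1.

    Sound arriving from the steered direction lines up and adds
    constructively; sound from other directions partially cancels.
    """
    shifted = [0.0] * delay + list(mic2[:len(mic2) - delay])
    return [(a + b) / 2 for a, b in zip(mic1, shifted)]
```

With more microphones and adaptively learned weights instead of one fixed delay, you get the math-heavy, machine-learning flavor of beamforming that modern earbuds run.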
Beamforming uses multiple sensor arrays to extract sounds based on a whole lot of math; the confluence of machine learning and the microphone. You see, humans have known how to do many of these things for centuries. We hooked a cup to a wire and sound came out the other side. We electrified it. We then started going from engineering to pure science. We then analyzed it with all the math so we better understood the rules. And that last step is when it's time to start writing software. Or sometimes it's controlling things with software that gives us the necessary understanding to make the next innovative leap. The invention of the microphone doesn't really belong to one person. Hooke, Wheatstone, Reis, Alexander Graham Bell, Thomas Edison, Wente, Thuras, Shannon, Hill, Mathews, and many, many more had a hand in putting that crappy mic in your laptop, the really good mic in your cell phone, and the stupidly good mic in your headphones. Some are even starting to move over to piezoelectric. But I think I'll save that for another episode. The microphone is a great example of the slow, methodical rise and iterative innovation that makes technologies truly lasting. It's not always shockingly abrupt or disruptive. But those innovations are permanently world-changing. Just think: because of the microphone and the computer getting together for a blind date in the 40s, you can now record your hit album in GarageBand. For free. Or call your parents any time you want. Now pretty much for free. So thank you for sticking with me through all of this. It's been a blast. You should probably call your parents now. I'm sure they'd love to hear from you. But before you do, thank you for tuning in to yet another episode of the History of Computing Podcast. We're so lucky to have you. Have a great day!
