| <?xml version="1.0" encoding="UTF-8"?> |
| <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" |
| "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> |
| <html xmlns="http://www.w3.org/1999/xhtml"> |
| <head> |
| <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> |
| <title>Web Audio API</title> |
| <meta name="revision" |
| content="$Id: Overview.html,v 1.4 2012/07/30 11:44:57 tmichel Exp $" /> |
| <link rel="stylesheet" href="style.css" type="text/css" /> |
| <!-- |
| <script src="section-links.js" type="application/ecmascript"></script> |
| <script src="dfn.js" type="application/ecmascript"></script> |
| --> |
| <!--[if IE]> |
| <style type='text/css'> |
| .ignore { |
| -ms-filter:"progid:DXImageTransform.Microsoft.Alpha(Opacity=50)"; |
| filter: alpha(opacity=50); |
| } |
| </style> |
| <![endif]--> |
| <link rel="stylesheet" href="//www.w3.org/StyleSheets/TR/W3C-ED" |
| type="text/css" /> |
| </head> |
| |
| <body> |
| |
| <div class="head"> |
| <p><a href="http://www.w3.org/"><img width="72" height="48" alt="W3C" |
| src="http://www.w3.org/Icons/w3c_home" /></a> </p> |
| |
| <h1 id="title" class="title">Web Audio API </h1> |
| |
| <h2 id="w3c-date-document"><acronym |
| title="World Wide Web Consortium">W3C</acronym> Editor's Draft |
| </h2> |
| <dl> |
| <dt>This version: </dt> |
| <dd><a |
| href="https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html">https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html</a> |
| </dd> |
| <dt>Latest published version: </dt> |
| <dd><a |
| href="http://www.w3.org/TR/webaudio/">http://www.w3.org/TR/webaudio/</a> |
| </dd> |
| <dt>Previous version: </dt> |
| <dd><a |
| href="http://www.w3.org/TR/2012/WD-webaudio-20120315/">http://www.w3.org/TR/2012/WD-webaudio-20120315/</a> |
| </dd> |
| </dl> |
| |
| <dl> |
| <dt>Editor: </dt> |
<dd>Chris Rogers, Google &lt;crogers@google.com&gt;</dd>
| </dl> |
| |
| <p class="copyright"><a |
| href="http://www.w3.org/Consortium/Legal/ipr-notice#Copyright">Copyright</a> © |
| 2012 <a href="http://www.w3.org/"><acronym |
| title="World Wide Web Consortium">W3C</acronym></a><sup>®</sup> (<a |
| href="http://www.csail.mit.edu/"><acronym |
| title="Massachusetts Institute of Technology">MIT</acronym></a>, <a |
| href="http://www.ercim.eu/"><acronym |
| title="European Research Consortium for Informatics and Mathematics">ERCIM</acronym></a>, |
| <a href="http://www.keio.ac.jp/">Keio</a>), All Rights Reserved. W3C <a |
| href="http://www.w3.org/Consortium/Legal/ipr-notice#Legal_Disclaimer">liability</a>, |
| <a |
| href="http://www.w3.org/Consortium/Legal/ipr-notice#W3C_Trademarks">trademark</a> |
| and <a href="http://www.w3.org/Consortium/Legal/copyright-documents">document |
| use</a> rules apply.</p> |
| <hr /> |
| </div> |
| |
| <div id="abstract-section" class="section"> |
| <h2 id="abstract">Abstract</h2> |
| |
| <p>This specification describes a high-level JavaScript <acronym |
| title="Application Programming Interface">API</acronym> for processing and |
| synthesizing audio in web applications. The primary paradigm is of an audio |
| routing graph, where a number of <a |
| href="#AudioNode-section"><code>AudioNode</code></a> objects are connected |
| together to define the overall audio rendering. The actual processing will |
| primarily take place in the underlying implementation (typically optimized |
| Assembly / C / C++ code), but <a href="#JavaScriptProcessing-section">direct |
| JavaScript processing and synthesis</a> is also supported. </p> |
| |
| <p>The <a href="#introduction">introductory</a> section covers the motivation |
| behind this specification.</p> |
| |
| <p>This API is designed to be used in conjunction with other APIs and elements |
| on the web platform, notably: XMLHttpRequest |
| (using the <code>responseType</code> and <code>response</code> attributes). For |
| games and interactive applications, it is anticipated to be used with the |
| <code>canvas</code> 2D and WebGL 3D graphics APIs. </p> |
| </div> |
| |
| <div id="sotd-section" class="section"> |
| <h2 id="sotd">Status of this Document</h2> |
| |
| |
| <p><em>This section describes the status of this document at the time of its |
| publication. Other documents may supersede this document. A list of current W3C |
| publications and the latest revision of this technical report can be found in |
| the <a href="http://www.w3.org/TR/">W3C technical reports index</a> at |
| http://www.w3.org/TR/. </em></p> |
| |
| <p>This is the Editor's Draft of the <cite>Web Audio API</cite> |
| specification. It has been produced by the <a |
| href="http://www.w3.org/2011/audio/"><b>W3C Audio Working Group</b></a> , which |
| is part of the W3C WebApps Activity.</p> |
| |
| |
<p>Please send comments about this document to &lt;<a
href="mailto:public-audio@w3.org">public-audio@w3.org</a>&gt; (<a
| href="http://lists.w3.org/Archives/Public/public-audio/">public archives</a> of |
| the W3C audio mailing list). Web content and browser developers are encouraged |
| to review this draft. </p> |
| |
| <p>Publication as a Working Draft does not imply endorsement by the W3C |
| Membership. This is a draft document and may be updated, replaced or obsoleted |
| by other documents at any time. It is inappropriate to cite this document as |
| other than work in progress.</p> |
| |
| <p> This document was produced by a group operating under the <a href="http://www.w3.org/Consortium/Patent-Policy-20040205/">5 February 2004 W3C Patent Policy</a>. W3C maintains a <a rel="disclosure" href="http://www.w3.org/2004/01/pp-impl/46884/status">public list of any patent disclosures</a> made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains <a href="http://www.w3.org/Consortium/Patent-Policy-20040205/#def-essential">Essential Claim(s)</a> must disclose the information in accordance with <a href="http://www.w3.org/Consortium/Patent-Policy-20040205/#sec-Disclosure">section 6 of the W3C Patent Policy</a>. </p> |
| </div> |
| |
| <div id="toc"> |
| <h2 id="L13522">Table of Contents</h2> |
| |
| <div class="toc"> |
| <ul> |
| <li><a href="#introduction">1. Introduction</a> |
| <ul> |
| <li><a href="#Features">1.1. Features</a></li> |
| <li><a href="#ModularRouting">1.2. Modular Routing</a></li> |
| <li><a href="#APIOverview">1.3. API Overview</a></li> |
| </ul> |
| </li> |
| <li><a href="#conformance">2. Conformance</a></li> |
| <li><a href="#API-section">4. The Audio API</a> |
| <ul> |
| <li><a href="#AudioContext-section">4.1. The AudioContext Interface</a> |
| <ul> |
| <li><a href="#attributes-AudioContext">4.1.1. Attributes</a></li> |
| <li><a href="#methodsandparams-AudioContext">4.1.2. Methods and |
| Parameters</a></li> |
| <li><a href="#lifetime-AudioContext">4.1.3. Lifetime</a></li> |
| </ul> |
| </li> |
| <li><a href="#OfflineAudioContext-section">4.1b. The OfflineAudioContext Interface</a> |
| </li> |
| |
| <li><a href="#AudioNode-section">4.2. The AudioNode Interface</a> |
| <ul> |
| <li><a href="#attributes-AudioNode">4.2.1. Attributes</a></li> |
| <li><a href="#methodsandparams-AudioNode">4.2.2. Methods and |
| Parameters</a></li> |
| <li><a href="#lifetime-AudioNode">4.2.3. Lifetime</a></li> |
| </ul> |
| </li> |
| <li><a href="#AudioDestinationNode">4.4. The AudioDestinationNode |
| Interface</a> |
| <ul> |
| <li><a href="#attributes-AudioDestinationNode">4.4.1. Attributes</a></li> |
| </ul> |
| </li> |
| <li><a href="#AudioParam">4.5. The AudioParam Interface</a> |
| <ul> |
| <li><a href="#attributes-AudioParam">4.5.1. Attributes</a></li> |
| <li><a href="#methodsandparams-AudioParam">4.5.2. Methods and |
| Parameters</a></li> |
| <li><a href="#computedValue-AudioParam-section">4.5.3. Computation of Value</a></li> |
| <li><a href="#example1-AudioParam-section">4.5.4. AudioParam Automation Example</a></li> |
| </ul> |
| </li> |
| <li><a href="#GainNode">4.7. The GainNode Interface</a> |
| <ul> |
| <li><a href="#attributes-GainNode">4.7.1. Attributes</a></li> |
| </ul> |
| </li> |
| <li><a href="#DelayNode">4.8. The DelayNode Interface</a> |
| <ul> |
| <li><a href="#attributes-GainNode_2">4.8.1. Attributes</a></li> |
| </ul> |
| </li> |
| <li><a href="#AudioBuffer">4.9. The AudioBuffer Interface</a> |
| <ul> |
| <li><a href="#attributes-AudioBuffer">4.9.1. Attributes</a></li> |
| <li><a href="#methodsandparams-AudioBuffer">4.9.2. Methods and |
| Parameters</a></li> |
| </ul> |
| </li> |
| <li><a href="#AudioBufferSourceNode">4.10. The AudioBufferSourceNode |
| Interface</a> |
| <ul> |
| <li><a href="#attributes-AudioBufferSourceNode">4.10.1. |
| Attributes</a></li> |
| <li><a href="#methodsandparams-AudioBufferSourceNode">4.10.2. Methods and |
| Parameters</a></li> |
| </ul> |
| </li> |
| <li><a href="#MediaElementAudioSourceNode">4.11. The |
| MediaElementAudioSourceNode Interface</a></li> |
| <li><a href="#ScriptProcessorNode">4.12. The ScriptProcessorNode |
| Interface</a> |
| <ul> |
| <li><a href="#attributes-ScriptProcessorNode">4.12.1. Attributes</a></li> |
| </ul> |
| </li> |
| <li><a href="#AudioProcessingEvent">4.13. The AudioProcessingEvent |
| Interface</a> |
| <ul> |
| <li><a href="#attributes-AudioProcessingEvent">4.13.1. Attributes</a></li> |
| </ul> |
| </li> |
| <li><a href="#PannerNode">4.14. The PannerNode Interface</a> |
| <ul> |
| <li><a href="#attributes-PannerNode_attributes">4.14.2. |
| Attributes</a></li> |
| <li><a href="#Methods_and_Parameters">4.14.3. Methods and |
| Parameters</a></li> |
| </ul> |
| </li> |
| <li><a href="#AudioListener">4.15. The AudioListener Interface</a> |
| <ul> |
| <li><a href="#attributes-AudioListener">4.15.1. Attributes</a></li> |
| <li><a href="#L15842">4.15.2. Methods and Parameters</a></li> |
| </ul> |
| </li> |
| <li><a href="#ConvolverNode">4.16. The ConvolverNode Interface</a> |
| <ul> |
| <li><a href="#attributes-ConvolverNode">4.16.1. Attributes</a></li> |
| </ul> |
| </li> |
| <li><a href="#AnalyserNode">4.17. The AnalyserNode |
| Interface</a> |
| <ul> |
| <li><a href="#attributes-ConvolverNode_2">4.17.1. Attributes</a></li> |
| <li><a href="#methods-and-parameters">4.17.2. Methods and |
| Parameters</a></li> |
| </ul> |
| </li> |
| <li><a href="#ChannelSplitterNode">4.18. The ChannelSplitterNode |
| Interface</a> |
| <ul> |
| <li><a href="#example-1">Example:</a></li> |
| </ul> |
| </li> |
| <li><a href="#ChannelMergerNode">4.19. The ChannelMergerNode Interface</a> |
| <ul> |
| <li><a href="#example-2">Example:</a></li> |
| </ul> |
| </li> |
| <li><a href="#DynamicsCompressorNode">4.20. The DynamicsCompressorNode |
| Interface</a> |
| <ul> |
| <li><a href="#attributes-DynamicsCompressorNode">4.20.1. |
| Attributes</a></li> |
| </ul> |
| </li> |
| <li><a href="#BiquadFilterNode">4.21. The BiquadFilterNode Interface</a> |
| <ul> |
| <li><a href="#BiquadFilterNode-description">4.21.1 Lowpass</a></li> |
| <li><a href="#HIGHPASS">4.21.2 Highpass</a></li> |
| <li><a href="#BANDPASS">4.21.3 Bandpass</a></li> |
| <li><a href="#LOWSHELF">4.21.4 Lowshelf</a></li> |
| <li><a href="#L16352">4.21.5 Highshelf</a></li> |
| <li><a href="#PEAKING">4.21.6 Peaking</a></li> |
| <li><a href="#NOTCH">4.21.7 Notch</a></li> |
| <li><a href="#ALLPASS">4.21.8 Allpass</a></li> |
| <li><a href="#Methods">4.21.9. Methods</a></li> |
| </ul> |
| </li> |
| <li><a href="#WaveShaperNode">4.22. The WaveShaperNode Interface</a> |
| <ul> |
| <li><a href="#attributes-WaveShaperNode">4.22.1. |
| Attributes</a></li> |
| </ul> |
| </li> |
| <li><a href="#OscillatorNode">4.23. The OscillatorNode Interface</a> |
| <ul> |
| <li><a href="#attributes-OscillatorNode">4.23.1. |
| Attributes</a></li> |
| <li><a href="#methodsandparams-OscillatorNode-section">4.23.2. Methods and |
| Parameters</a></li> |
| </ul> |
| </li> |
| <li><a href="#PeriodicWave">4.24. The PeriodicWave Interface</a> |
| </li> |
| <li><a href="#MediaStreamAudioSourceNode">4.25. The |
| MediaStreamAudioSourceNode Interface</a></li> |
| <li><a href="#MediaStreamAudioDestinationNode">4.26. The |
| MediaStreamAudioDestinationNode Interface</a></li> |
| </ul> |
| </li> |
| <li><a href="#MixerGainStructure">6. Mixer Gain Structure</a> |
| <ul> |
| <li><a href="#background">Background</a></li> |
| <li><a href="#SummingJunction">Summing Inputs</a></li> |
| <li><a href="#gain-Control">Gain Control</a></li> |
| <li><a href="#Example-mixer-with-send-busses">Example: Mixer with Send |
| Busses</a></li> |
| </ul> |
| </li> |
| <li><a href="#DynamicLifetime">7. Dynamic Lifetime</a> |
| <ul> |
| <li><a href="#DynamicLifetime-background">Background</a></li> |
| <li><a href="#Example-DynamicLifetime">Example</a></li> |
| </ul> |
| </li> |
| <li><a href="#UpMix">9. Channel up-mixing and down-mixing</a> |
| <ul> |
| <li><a href="#ChannelLayouts">9.1. Speaker Channel Layouts</a> |
| <ul> |
| <li><a href="#ChannelOrdering">9.1.1. Channel Ordering</a></li> |
| <li><a href="#UpMix-sub">9.1.2. Up Mixing</a></li> |
| <li><a href="#down-mix">9.1.3. Down Mixing</a></li> |
| </ul> |
| </li> |
| |
| <li><a href="#ChannelRules-section">9.2. Channel Rules Examples</a> |
| |
| </ul> |
| </li> |
| <li><a href="#Spatialization">11. Spatialization / Panning </a> |
| <ul> |
| <li><a href="#Spatialization-background">Background</a></li> |
| <li><a href="#Spatialization-panning-algorithm">Panning Algorithm</a></li> |
| <li><a href="#Spatialization-distance-effects">Distance Effects</a></li> |
| <li><a href="#Spatialization-sound-cones">Sound Cones</a></li> |
| <li><a href="#Spatialization-doppler-shift">Doppler Shift</a></li> |
| </ul> |
| </li> |
| <li><a href="#Convolution">12. Linear Effects using Convolution</a> |
| <ul> |
| <li><a href="#Convolution-background">Background</a></li> |
| <li><a href="#Convolution-motivation">Motivation for use as a |
| Standard</a></li> |
| <li><a href="#Convolution-implementation-guide">Implementation Guide</a></li> |
| <li><a href="#Convolution-reverb-effect">Reverb Effect (with |
| matrixing)</a></li> |
| <li><a href="#recording-impulse-responses">Recording Impulse |
| Responses</a></li> |
| <li><a href="#tools">Tools</a></li> |
| <li><a href="#recording-setup">Recording Setup</a></li> |
| <li><a href="#warehouse">The Warehouse Space</a></li> |
| </ul> |
| </li> |
| <li><a href="#JavaScriptProcessing">13. JavaScript Synthesis and |
| Processing</a> |
| <ul> |
| <li><a href="#custom-DSP-effects">Custom DSP Effects</a></li> |
| <li><a href="#educational-applications">Educational Applications</a></li> |
| <li><a href="#javaScript-performance">JavaScript Performance</a></li> |
| </ul> |
| </li> |
| <li><a href="#Performance">15. Performance Considerations</a> |
| <ul> |
| <li><a href="#Latency">15.1. Latency: What it is and Why it's |
| Important</a></li> |
| <li><a href="#audio-glitching">15.2. Audio Glitching</a></li> |
| <li><a href="#hardware-scalability">15.3. Hardware Scalability</a> |
| <ul> |
| <li><a href="#CPU-monitoring">15.3.1. CPU monitoring</a></li> |
| <li><a href="#Voice-dropping">15.3.2. Voice Dropping</a></li> |
| <li><a href="#Simplification-of-Effects-Processing">15.3.3. |
| Simplification of Effects Processing</a></li> |
| <li><a href="#Sample-rate">15.3.4. Sample Rate</a></li> |
| <li><a href="#pre-flighting">15.3.5. Pre-flighting</a></li> |
| <li><a href="#Authoring-for-different-user-agents">15.3.6. Authoring |
| for different user agents</a></li> |
| <li><a href="#Scalability-of-Direct-JavaScript-Synthesis">15.3.7. |
| Scalability of Direct JavaScript Synthesis / Processing</a></li> |
| </ul> |
| </li> |
| <li><a href="#JavaScriptPerformance">15.4. JavaScript Issues with |
| real-time Processing and Synthesis: </a></li> |
| </ul> |
| </li> |
| <li><a href="#ExampleApplications">16. Example Applications</a> |
| <ul> |
| <li><a href="#basic-sound-playback">Basic Sound Playback</a></li> |
| <li><a href="#threeD-environmentse-and-games">3D Environments and |
| Games</a></li> |
| <li><a href="#musical-applications">Musical Applications</a></li> |
| <li><a href="#music-visualizers">Music Visualizers</a></li> |
| <li><a href="#educational-applications_2">Educational |
| Applications</a></li> |
| <li><a href="#artistic-audio-exploration">Artistic Audio |
| Exploration</a></li> |
| </ul> |
| </li> |
| <li><a href="#SecurityConsiderations">17. Security Considerations</a></li> |
| <li><a href="#PrivacyConsiderations">18. Privacy Considerations</a></li> |
| <li><a href="#requirements">19. Requirements and Use Cases</a></li> |
| <li><a href="#OldNames">20. Old Names</a></li> |
| <li><a href="#L17310">A.References</a> |
| <ul> |
| <li><a href="#Normative-references">A.1 Normative references</a></li> |
| <li><a href="#Informative-references">A.2 Informative references</a></li> |
| </ul> |
| </li> |
| <li><a href="#L17335">B.Acknowledgements</a></li> |
| <li><a href="#ChangeLog">C. Web Audio API Change Log</a></li> |
| </ul> |
| </div> |
| </div> |
| |
| <div id="sections"> |
| |
| <div id="div-introduction" class="section"> |
| <h2 id="introduction">1. Introduction</h2> |
| |
| <p class="norm">This section is informative.</p> |
| |
| <p>Audio on the web has been fairly primitive up to this point and until very |
| recently has had to be delivered through plugins such as Flash and QuickTime. |
| The introduction of the <code>audio</code> element in HTML5 is very important, |
| allowing for basic streaming audio playback. But, it is not powerful enough to |
| handle more complex audio applications. For sophisticated web-based games or |
| interactive applications, another solution is required. It is a goal of this |
| specification to include the capabilities found in modern game audio engines as |
| well as some of the mixing, processing, and filtering tasks that are found in |
| modern desktop audio production applications. </p> |
| |
<p>The API has been designed with a wide variety of <a
href="#ExampleApplications-section">use cases</a> in mind. Ideally, it should
| be able to support <i>any</i> use case which could reasonably be implemented |
| with an optimized C++ engine controlled via JavaScript and run in a browser. |
| That said, modern desktop audio software can have very advanced capabilities, |
| some of which would be difficult or impossible to build with this system. |
| Apple's Logic Audio is one such application which has support for external MIDI |
| controllers, arbitrary plugin audio effects and synthesizers, highly optimized |
| direct-to-disk audio file reading/writing, tightly integrated time-stretching, |
| and so on. Nevertheless, the proposed system will be quite capable of |
| supporting a large range of reasonably complex games and interactive |
| applications, including musical ones. And it can be a very good complement to |
| the more advanced graphics features offered by WebGL. The API has been designed |
| so that more advanced capabilities can be added at a later time. </p> |
| |
| <div id="Features-section" class="section"> |
| <h2 id="Features">1.1. Features</h2> |
| </div> |
| |
| <p>The API supports these primary features: </p> |
| <ul> |
| <li><a href="#ModularRouting-section">Modular routing</a> for simple or |
| complex mixing/effect architectures, including <a |
| href="#MixerGainStructure-section">multiple sends and submixes</a>.</li> |
| <li><a href="#AudioParam">Sample-accurate scheduled sound |
| playback</a> with low <a href="#Latency-section">latency</a> for musical |
| applications requiring a very high degree of rhythmic precision such as |
| drum machines and sequencers. This also includes the possibility of <a |
| href="#DynamicLifetime-section">dynamic creation</a> of effects. </li> |
| <li>Automation of audio parameters for envelopes, fade-ins / fade-outs, |
| granular effects, filter sweeps, LFOs etc. </li> |
| <li>Flexible handling of channels in an audio stream, allowing them to be split and merged.</li> |
| |
| <li>Processing of audio sources from an <code>audio</code> or |
| <code>video</code> <a href="#MediaElementAudioSourceNode">media |
| element</a>. </li> |
| |
| <li>Processing live audio input using a <a href="#MediaStreamAudioSourceNode">MediaStream</a> |
| from getUserMedia(). |
| </li> |
| |
| <li>Integration with WebRTC |
| <ul> |
| |
| |
| <li>Processing audio received from a remote peer using a <a href="#MediaStreamAudioSourceNode">MediaStream</a>. |
| </li> |
| |
| <li>Sending a generated or processed audio stream to a remote peer using a <a href="#MediaStreamAudioDestinationNode">MediaStream</a>. |
| </li> |
| |
| </ul> |
| </li> |
| |
| <li>Audio stream synthesis and processing <a |
| href="#JavaScriptProcessing-section">directly in JavaScript</a>. </li> |
| <li><a href="#Spatialization-section">Spatialized audio</a> supporting a wide |
| range of 3D games and immersive environments: |
| <ul> |
| <li>Panning models: equal-power, HRTF, pass-through </li> |
| <li>Distance Attenuation </li> |
| <li>Sound Cones </li> |
| <li>Obstruction / Occlusion </li> |
| <li>Doppler Shift </li> |
| <li>Source / Listener based</li> |
| </ul> |
| </li> |
| <li>A <a href="#Convolution-section">convolution engine</a> for a wide range |
| of linear effects, especially very high-quality room effects. Here are some |
| examples of possible effects: |
| <ul> |
| <li>Small / large room </li> |
| <li>Cathedral </li> |
| <li>Concert hall </li> |
| <li>Cave </li> |
| <li>Tunnel </li> |
| <li>Hallway </li> |
| <li>Forest </li> |
| <li>Amphitheater </li> |
| <li>Sound of a distant room through a doorway </li> |
| <li>Extreme filters</li> |
| <li>Strange backwards effects</li> |
| <li>Extreme comb filter effects </li> |
| </ul> |
| </li> |
| <li>Dynamics compression for overall control and sweetening of the mix </li> |
| <li>Efficient <a href="#AnalyserNode">real-time time-domain and |
| frequency analysis / music visualizer support</a></li> |
| <li>Efficient biquad filters for lowpass, highpass, and other common filters. |
| </li> |
| <li>A Waveshaping effect for distortion and other non-linear effects</li> |
| <li>Oscillators</li> |
| |
| </ul> |
| |
| <div id="ModularRouting-section"> |
| <h2 id="ModularRouting">1.2. Modular Routing</h2> |
| |
| <p>Modular routing allows arbitrary connections between different <a |
| href="#AudioNode-section"><code>AudioNode</code></a> objects. Each node can |
| have <dfn>inputs</dfn> and/or <dfn>outputs</dfn>. A <dfn>source node</dfn> has no inputs |
| and a single output. A <dfn>destination node</dfn> has |
| one input and no outputs, the most common example being <a |
| href="#AudioDestinationNode-section"><code>AudioDestinationNode</code></a> the final destination to the audio |
| hardware. Other nodes such as filters can be placed between the source and destination nodes. |
| The developer doesn't have to worry about low-level stream format details |
| when two objects are connected together; <a href="#UpMix-section">the right |
| thing just happens</a>. For example, if a mono audio stream is connected to a |
| stereo input it should just mix to left and right channels <a |
| href="#UpMix-section">appropriately</a>. </p> |
| |
| <p>In the simplest case, a single source can be routed directly to the output. |
| All routing occurs within an <a |
| href="#AudioContext-section"><code>AudioContext</code></a> containing a single |
| <a href="#AudioDestinationNode-section"><code>AudioDestinationNode</code></a>: |
| </p> |
| <img alt="modular routing" src="images/modular-routing1.png" /> |
| |
<p>Illustrating this simple routing, here's an example playing a single
| sound: </p> |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">ECMAScript</span> </div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="es-code"> |
| |
| var context = new AudioContext(); |
| |
| function playSound() { |
| var source = context.createBufferSource(); |
| source.buffer = dogBarkingBuffer; |
| source.connect(context.destination); |
| source.start(0); |
| } |
| </code></pre> |
| </div> |
| </div> |
| |
| <p>Here's a more complex example with three sources and a convolution reverb |
| send with a dynamics compressor at the final output stage: </p> |
| <img alt="modular routing2" src="images/modular-routing2.png" /> |
| |
| <div class="example"> |
| |
| <div class="exampleHeader"> |
| Example</div> |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">ECMAScript</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="es-code"> |
| |
| var context = 0; |
| var compressor = 0; |
| var reverb = 0; |
| |
| var source1 = 0; |
| var source2 = 0; |
| var source3 = 0; |
| |
| var lowpassFilter = 0; |
| var waveShaper = 0; |
| var panner = 0; |
| |
| var dry1 = 0; |
| var dry2 = 0; |
| var dry3 = 0; |
| |
| var wet1 = 0; |
| var wet2 = 0; |
| var wet3 = 0; |
| |
| var masterDry = 0; |
| var masterWet = 0; |
| |
| function setupRoutingGraph () { |
| context = new AudioContext(); |
| |
| // Create the effects nodes. |
| lowpassFilter = context.createBiquadFilter(); |
| waveShaper = context.createWaveShaper(); |
| panner = context.createPanner(); |
| compressor = context.createDynamicsCompressor(); |
| reverb = context.createConvolver(); |
| |
| // Create master wet and dry. |
| masterDry = context.createGain(); |
| masterWet = context.createGain(); |
| |
| // Connect final compressor to final destination. |
| compressor.connect(context.destination); |
| |
| // Connect master dry and wet to compressor. |
| masterDry.connect(compressor); |
| masterWet.connect(compressor); |
| |
| // Connect reverb to master wet. |
| reverb.connect(masterWet); |
| |
| // Create a few sources. |
| source1 = context.createBufferSource(); |
| source2 = context.createBufferSource(); |
| source3 = context.createOscillator(); |
| |
| source1.buffer = manTalkingBuffer; |
| source2.buffer = footstepsBuffer; |
| source3.frequency.value = 440; |
| |
| // Connect source1 |
| dry1 = context.createGain(); |
| wet1 = context.createGain(); |
| source1.connect(lowpassFilter); |
| lowpassFilter.connect(dry1); |
| lowpassFilter.connect(wet1); |
| dry1.connect(masterDry); |
| wet1.connect(reverb); |
| |
| // Connect source2 |
| dry2 = context.createGain(); |
| wet2 = context.createGain(); |
| source2.connect(waveShaper); |
| waveShaper.connect(dry2); |
| waveShaper.connect(wet2); |
| dry2.connect(masterDry); |
| wet2.connect(reverb); |
| |
| // Connect source3 |
| dry3 = context.createGain(); |
| wet3 = context.createGain(); |
| source3.connect(panner); |
| panner.connect(dry3); |
| panner.connect(wet3); |
| dry3.connect(masterDry); |
| wet3.connect(reverb); |
| |
| // Start the sources now. |
| source1.start(0); |
| source2.start(0); |
| source3.start(0); |
| } |
| </code></pre> |
| </div> |
| </div> |
| </div> |
| </div> |
| |
| </div> |
| |
| <div id="APIOverview-section" class="section"> |
| <h2 id="APIOverview">1.3. API Overview</h2> |
| </div> |
| |
| <p>The interfaces defined are: </p> |
| <ul> |
| <li>An <a class="dfnref" href="#AudioContext-section">AudioContext</a> |
| interface, which contains an audio signal graph representing connections |
between AudioNodes. </li>
| <li>An <a class="dfnref" href="#AudioNode-section">AudioNode</a> interface, |
| which represents audio sources, audio outputs, and intermediate processing |
| modules. AudioNodes can be dynamically connected together in a <a |
| href="#ModularRouting-section">modular fashion</a>. <code>AudioNodes</code> |
exist in the context of an <code>AudioContext</code>.</li>
| <li>An <a class="dfnref" |
| href="#AudioDestinationNode-section">AudioDestinationNode</a> interface, an |
| AudioNode subclass representing the final destination for all rendered |
| audio. </li> |
| <li>An <a class="dfnref" href="#AudioBuffer-section">AudioBuffer</a> |
| interface, for working with memory-resident audio assets. These can |
| represent one-shot sounds, or longer audio clips. </li> |
| <li>An <a class="dfnref" |
| href="#AudioBufferSourceNode-section">AudioBufferSourceNode</a> interface, |
| an AudioNode which generates audio from an AudioBuffer. </li> |
| <li>A <a class="dfnref" |
| href="#MediaElementAudioSourceNode-section">MediaElementAudioSourceNode</a> |
| interface, an AudioNode which is the audio source from an |
| <code>audio</code>, <code>video</code>, or other media element. </li> |
| <li>A <a class="dfnref" |
| href="#MediaStreamAudioSourceNode-section">MediaStreamAudioSourceNode</a> |
| interface, an AudioNode which is the audio source from a |
| MediaStream such as live audio input, or from a remote peer. </li> |
| <li>A <a class="dfnref" |
| href="#MediaStreamAudioDestinationNode-section">MediaStreamAudioDestinationNode</a> |
| interface, an AudioNode which is the audio destination to a |
| MediaStream sent to a remote peer. </li> |
| <li>A <a class="dfnref" |
| href="#ScriptProcessorNode-section">ScriptProcessorNode</a> interface, an |
| AudioNode for generating or processing audio directly in JavaScript. </li> |
| <li>An <a class="dfnref" |
| href="#AudioProcessingEvent-section">AudioProcessingEvent</a> interface, |
| which is an event type used with <code>ScriptProcessorNode</code> objects. |
| </li> |
| <li>An <a class="dfnref" href="#AudioParam-section">AudioParam</a> interface, |
| for controlling an individual aspect of an AudioNode's functioning, such as |
| volume. </li> |
| <li>An <a class="dfnref" href="#GainNode-section">GainNode</a> |
| interface, for explicit gain control. Because inputs to AudioNodes support |
| multiple connections (as a unity-gain summing junction), mixers can be <a |
| href="#MixerGainStructure-section">easily built</a> with GainNodes. |
| </li> |
| <li>A <a class="dfnref" href="#BiquadFilterNode-section">BiquadFilterNode</a> |
| interface, an AudioNode for common low-order filters such as: |
| <ul> |
| <li>Low Pass</li> |
| <li>High Pass </li> |
| <li>Band Pass </li> |
| <li>Low Shelf </li> |
| <li>High Shelf </li> |
| <li>Peaking </li> |
| <li>Notch </li> |
| <li>Allpass </li> |
| </ul> |
| </li> |
| <li>A <a class="dfnref" href="#DelayNode-section">DelayNode</a> interface, an |
| AudioNode which applies a dynamically adjustable variable delay. </li> |
| <li>An <a class="dfnref" href="#PannerNode-section">PannerNode</a> |
| interface, for spatializing / positioning audio in 3D space. </li> |
| <li>An <a class="dfnref" href="#AudioListener-section">AudioListener</a> |
interface, which works with a <code>PannerNode</code> for
| spatialization. </li> |
| <li>A <a class="dfnref" href="#ConvolverNode-section">ConvolverNode</a> |
| interface, an AudioNode for applying a <a |
| href="#Convolution-section">real-time linear effect</a> (such as the sound |
| of a concert hall). </li> |
| <li>A <a class="dfnref" |
| href="#AnalyserNode-section">AnalyserNode</a> interface, |
| for use with music visualizers, or other visualization applications. </li> |
| <li>A <a class="dfnref" |
| href="#ChannelSplitterNode-section">ChannelSplitterNode</a> interface, |
| for accessing the individual channels of an audio stream in the routing |
| graph. </li> |
| <li>A <a class="dfnref" |
| href="#ChannelMergerNode-section">ChannelMergerNode</a> interface, for |
| combining channels from multiple audio streams into a single audio stream. |
| </li> |
| <li>A <a |
| href="#DynamicsCompressorNode-section">DynamicsCompressorNode</a> interface, an |
| AudioNode for dynamics compression. </li> |
| <li>A <a class="dfnref" href="#dfn-WaveShaperNode">WaveShaperNode</a> |
| interface, an AudioNode which applies a non-linear waveshaping effect for |
| distortion and other more subtle warming effects. </li> |
| <li>A <a class="dfnref" href="#dfn-OscillatorNode">OscillatorNode</a> |
| interface, an audio source generating a periodic waveform. </li> |
| </ul> |
| </div> |
| |
| <div id="conformance-section" class="section"> |
| <h2 id="conformance">2. Conformance</h2> |
| |
| <p>Everything in this specification is normative except for examples and |
| sections marked as being informative. </p> |
| |
| <p>The keywords “<span class="rfc2119">MUST</span>”, “<span |
| class="rfc2119">MUST NOT</span>”, “<span |
| class="rfc2119">REQUIRED</span>”, “<span class="rfc2119">SHALL</span>”, |
| “<span class="rfc2119">SHALL NOT</span>”, “<span |
| class="rfc2119">RECOMMENDED</span>”, “<span class="rfc2119">MAY</span>” |
| and “<span class="rfc2119">OPTIONAL</span>” in this document are to be |
| interpreted as described in <cite><a href="http://www.ietf.org/rfc/rfc2119">Key |
| words for use in RFCs to Indicate Requirement Levels</a></cite> <a |
| href="#RFC2119">[RFC2119]</a>. </p> |
| |
| <p>The following conformance classes are defined by this specification: </p> |
| <dl> |
| <dt><dfn id="dfn-conforming-implementation">conforming |
| implementation</dfn></dt> |
| <dd><p>A user agent is considered to be a <a class="dfnref" |
| href="#dfn-conforming-implementation">conforming implementation</a> if it |
| satisfies all of the <span class="rfc2119">MUST</span>-, <span |
| class="rfc2119">REQUIRED</span>- and <span |
| class="rfc2119">SHALL</span>-level criteria in this specification that |
| apply to implementations. </p> |
| </dd> |
| </dl> |
| </div> |
| |
| <div id="terminology-section" class="section"> |
| |
| <div id="API-section-section" class="section"> |
| <h2 id="API-section">4. The Audio API</h2> |
| </div> |
| |
| <div id="AudioContext-section-section" class="section"> |
| <h2 id="AudioContext-section">4.1. The AudioContext Interface</h2> |
| |
| <p>This interface represents a set of <a |
| href="#AudioNode-section"><code>AudioNode</code></a> objects and their |
| connections. It allows for arbitrary routing of signals to the <a |
| href="#AudioDestinationNode-section"><code>AudioDestinationNode</code></a> |
| (what the user ultimately hears). Nodes are created from the context and are |
| then <a href="#ModularRouting-section">connected</a> together. In most use |
| cases, only a single AudioContext is used per document.</p> |
| |
<br />
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="audio-context-idl"> |
| |
| callback DecodeSuccessCallback = void (AudioBuffer decodedData); |
| callback DecodeErrorCallback = void (); |
| |
| [Constructor] |
| interface <dfn id="dfn-AudioContext">AudioContext</dfn> : EventTarget { |
| |
| readonly attribute AudioDestinationNode destination; |
| readonly attribute float sampleRate; |
| readonly attribute double currentTime; |
| readonly attribute AudioListener listener; |
| |
| AudioBuffer createBuffer(unsigned long numberOfChannels, unsigned long length, float sampleRate); |
| |
| void decodeAudioData(ArrayBuffer audioData, |
| DecodeSuccessCallback successCallback, |
| optional DecodeErrorCallback errorCallback); |
| |
| |
| <span class="comment">// AudioNode creation </span> |
| AudioBufferSourceNode createBufferSource(); |
| |
| MediaElementAudioSourceNode createMediaElementSource(HTMLMediaElement mediaElement); |
| |
| MediaStreamAudioSourceNode createMediaStreamSource(MediaStream mediaStream); |
| MediaStreamAudioDestinationNode createMediaStreamDestination(); |
| |
| ScriptProcessorNode createScriptProcessor(optional unsigned long bufferSize = 0, |
| optional unsigned long numberOfInputChannels = 2, |
| optional unsigned long numberOfOutputChannels = 2); |
| |
| AnalyserNode createAnalyser(); |
| GainNode createGain(); |
| DelayNode createDelay(optional double maxDelayTime = 1.0); |
| BiquadFilterNode createBiquadFilter(); |
| WaveShaperNode createWaveShaper(); |
| PannerNode createPanner(); |
| ConvolverNode createConvolver(); |
| |
| ChannelSplitterNode createChannelSplitter(optional unsigned long numberOfOutputs = 6); |
| ChannelMergerNode createChannelMerger(optional unsigned long numberOfInputs = 6); |
| |
| DynamicsCompressorNode createDynamicsCompressor(); |
| |
| OscillatorNode createOscillator(); |
| PeriodicWave createPeriodicWave(Float32Array real, Float32Array imag); |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| |
| <div id="attributes-AudioContext-section" class="section"> |
| <h3 id="attributes-AudioContext">4.1.1. Attributes</h3> |
| <dl> |
| <dt id="dfn-destination"><code>destination</code></dt> |
| <dd><p>An <a |
| href="#AudioDestinationNode-section"><code>AudioDestinationNode</code></a> |
| with a single input representing the final destination for all audio. |
| Usually this will represent the actual audio hardware. |
| All AudioNodes actively rendering |
| audio will directly or indirectly connect to <code>destination</code>.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-sampleRate"><code>sampleRate</code></dt> |
| <dd><p>The sample rate (in sample-frames per second) at which the |
| AudioContext handles audio. It is assumed that all AudioNodes in the |
context run at this rate. As a consequence of this assumption, sample-rate
converters and "varispeed" processors are not supported in real-time
processing.</p>
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-currentTime"><code>currentTime</code></dt> |
| <dd><p>This is a time in seconds which starts at zero when the context is |
| created and increases in real-time. All scheduled times are relative to |
| it. This is not a "transport" time which can be started, paused, and |
| re-positioned. It is always moving forward. A GarageBand-like timeline |
| transport system can be very easily built on top of this (in JavaScript). |
| This time corresponds to an ever-increasing hardware timestamp. </p> |
| </dd> |
| </dl> |
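<p>As an informative sketch, a sound can be scheduled relative to
<code>currentTime</code>. Here <code>myBuffer</code> is assumed to be a
previously decoded AudioBuffer:</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span> </div>

<div class="blockContent">
<pre class="code"><code class="es-code">

var source = context.createBufferSource();
source.buffer = myBuffer;
source.connect(context.destination);

// Schedule playback for half a second from now.
source.start(context.currentTime + 0.5);
</code></pre>
</div>
</div>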
| <dl> |
| <dt id="dfn-listener"><code>listener</code></dt> |
| <dd><p>An <a href="#AudioListener-section"><code>AudioListener</code></a> |
| which is used for 3D <a |
| href="#Spatialization-section">spatialization</a>.</p> |
| </dd> |
| </dl> |
| </div> |
| |
| <div id="methodsandparams-AudioContext-section" class="section"> |
| <h3 id="methodsandparams-AudioContext">4.1.2. Methods and Parameters</h3> |
| <dl> |
| <dt id="dfn-createBuffer">The <code>createBuffer</code> method</dt> |
| <dd><p>Creates an AudioBuffer of the given size. The audio data in the |
buffer will be zero-initialized (silent). A NOT_SUPPORTED_ERR exception will be thrown if
| the <code>numberOfChannels</code> or <code>sampleRate</code> are out-of-bounds, |
| or if length is 0.</p> |
| <p>The <dfn id="dfn-numberOfChannels">numberOfChannels</dfn> parameter |
| determines how many channels the buffer will have. An implementation must support at least 32 channels. </p> |
| <p>The <dfn id="dfn-length">length</dfn> parameter determines the size of |
| the buffer in sample-frames. </p> |
| <p>The <dfn id="dfn-sampleRate_2">sampleRate</dfn> parameter describes |
| the sample-rate of the linear PCM audio data in the buffer in |
| sample-frames per second. An implementation must support sample-rates in at least the range 22050 to 96000.</p> |
| </dd> |
| </dl> |
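<p>An informative sketch, creating one second of silent stereo audio at the
context's sample-rate:</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span> </div>

<div class="blockContent">
<pre class="code"><code class="es-code">

// Two channels, one second long, at the AudioContext's own rate.
var buffer = context.createBuffer(2, context.sampleRate, context.sampleRate);
</code></pre>
</div>
</div>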
| <dl> |
| <dt id="dfn-decodeAudioData">The <code>decodeAudioData</code> method</dt> |
| <dd><p>Asynchronously decodes the audio file data contained in the |
| ArrayBuffer. The ArrayBuffer can, for example, be loaded from an XMLHttpRequest's |
| <code>response</code> attribute after setting the <code>responseType</code> to "arraybuffer". |
| Audio file data can be in any of the |
| formats supported by the <code>audio</code> element. </p> |
| <p><dfn id="dfn-audioData">audioData</dfn> is an ArrayBuffer containing |
| audio file data.</p> |
| <p><dfn id="dfn-successCallback">successCallback</dfn> is a callback |
| function which will be invoked when the decoding is finished. The single |
| argument to this callback is an AudioBuffer representing the decoded PCM |
| audio data.</p> |
| <p><dfn id="dfn-errorCallback">errorCallback</dfn> is a callback function |
| which will be invoked if there is an error decoding the audio file |
| data.</p> |
| |
| <p> |
| The following steps must be performed: |
| </p> |
| <ol> |
| |
| <li>Temporarily neuter the <dfn>audioData</dfn> ArrayBuffer in such a way that JavaScript code may not |
| access or modify the data.</li> |
| <li>Queue a decoding operation to be performed on another thread.</li> |
| <li>The decoding thread will attempt to decode the encoded <dfn>audioData</dfn> into linear PCM. |
| If a decoding error is encountered due to the audio format not being recognized or supported, or |
because of corrupted/unexpected/inconsistent data, then the <dfn>audioData</dfn> neutered state
| will be restored to normal and the <dfn>errorCallback</dfn> will be |
| scheduled to run on the main thread's event loop and these steps will be terminated.</li> |
| <li>The decoding thread will take the result, representing the decoded linear PCM audio data, |
| and resample it to the sample-rate of the AudioContext if it is different from the sample-rate |
| of <dfn>audioData</dfn>. The final result (after possibly sample-rate converting) will be stored |
| in an AudioBuffer. |
| </li> |
<li>The <dfn>audioData</dfn> neutered state will be restored to normal.
| </li> |
| <li> |
| The <dfn>successCallback</dfn> function will be scheduled to run on the main thread's event loop |
| given the AudioBuffer from step (4) as an argument. |
| </li> |
| </ol> |
| </dd> |
| </dl> |
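<p>An informative sketch showing how the <code>dogBarkingBuffer</code> from the
earlier routing example might be loaded and decoded; the URL
<code>"dogBarking.mp3"</code> is hypothetical:</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span> </div>

<div class="blockContent">
<pre class="code"><code class="es-code">

var dogBarkingBuffer = null;

var request = new XMLHttpRequest();
request.open("GET", "dogBarking.mp3", true);
request.responseType = "arraybuffer";

request.onload = function() {
    context.decodeAudioData(
        request.response,
        function(decodedData) {
            dogBarkingBuffer = decodedData; // an AudioBuffer, ready for playback
        },
        function() {
            // The audio file data could not be decoded.
        });
};
request.send();
</code></pre>
</div>
</div>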
| <dl> |
| <dt id="dfn-createBufferSource">The <code>createBufferSource</code> |
| method</dt> |
| <dd><p>Creates an <a |
| href="#AudioBufferSourceNode-section"><code>AudioBufferSourceNode</code></a>.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-createMediaElementSource">The <code>createMediaElementSource</code> |
| method</dt> |
| <dd><p>Creates a <a |
| href="#MediaElementAudioSourceNode-section"><code>MediaElementAudioSourceNode</code></a> given an HTMLMediaElement. |
| As a consequence of calling this method, audio playback from the HTMLMediaElement will be re-routed |
| into the processing graph of the AudioContext.</p> |
| </dd> |
| </dl> |
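<p>An informative sketch; the element id <code>"player"</code> is assumed to
refer to an <code>audio</code> element in the document:</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span> </div>

<div class="blockContent">
<pre class="code"><code class="es-code">

var mediaElement = document.getElementById("player");
var mediaSource = context.createMediaElementSource(mediaElement);

// The element's audio now plays through the graph instead of directly.
mediaSource.connect(context.destination);
</code></pre>
</div>
</div>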
| <dl> |
| <dt id="dfn-createMediaStreamSource">The <code>createMediaStreamSource</code> |
| method</dt> |
| <dd><p>Creates a <a |
| href="#MediaStreamAudioSourceNode-section"><code>MediaStreamAudioSourceNode</code></a> given a MediaStream. |
| As a consequence of calling this method, audio playback from the MediaStream will be re-routed |
| into the processing graph of the AudioContext.</p> |
| </dd> |
| </dl> |
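<p>An informative sketch using <code>getUserMedia()</code> (vendor prefixes
omitted) to route live audio input into the graph:</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span> </div>

<div class="blockContent">
<pre class="code"><code class="es-code">

navigator.getUserMedia({ audio: true },
    function(stream) {
        var streamSource = context.createMediaStreamSource(stream);
        streamSource.connect(context.destination);
    },
    function(error) {
        // Access to the audio input was denied.
    });
</code></pre>
</div>
</div>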
| |
| <dl> |
| <dt id="dfn-createMediaStreamDestination">The <code>createMediaStreamDestination</code> |
| method</dt> |
| <dd><p>Creates a <a |
| href="#MediaStreamAudioDestinationNode-section"><code>MediaStreamAudioDestinationNode</code></a>. |
| </p> |
| </dd> |
| </dl> |
| |
| <dl> |
| <dt id="dfn-createScriptProcessor">The <code>createScriptProcessor</code> |
| method</dt> |
| <dd><p>Creates a <a |
| href="#ScriptProcessorNode"><code>ScriptProcessorNode</code></a> for |
direct audio processing using JavaScript. An INDEX_SIZE_ERR exception MUST be thrown if <code>bufferSize</code>, <code>numberOfInputChannels</code>, or <code>numberOfOutputChannels</code>
is outside the valid range. </p>
| <p>The <dfn id="dfn-bufferSize">bufferSize</dfn> parameter determines the |
| buffer size in units of sample-frames. If it's not passed in, or if the |
| value is 0, then the implementation will choose the best buffer size for |
the given environment, which will be a constant power of 2 throughout the lifetime
| of the node. Otherwise if the author explicitly specifies the bufferSize, |
| it must be one of the following values: 256, 512, 1024, 2048, 4096, 8192, |
| 16384. This value controls how |
| frequently the <code>audioprocess</code> event is dispatched and |
| how many sample-frames need to be processed each call. Lower values for |
| <code>bufferSize</code> will result in a lower (better) <a |
| href="#Latency-section">latency</a>. Higher values will be necessary to |
| avoid audio breakup and <a href="#Glitching-section">glitches</a>. |
It is recommended that authors not specify this buffer size, allowing
the implementation to pick a good buffer size that balances latency
and audio quality.
| </p> |
| <p>The <dfn id="dfn-numberOfInputChannels">numberOfInputChannels</dfn> parameter (defaults to 2) and |
| determines the number of channels for this node's input. Values of up to 32 must be supported. </p> |
| <p>The <dfn id="dfn-numberOfOutputChannels">numberOfOutputChannels</dfn> parameter (defaults to 2) and |
| determines the number of channels for this node's output. Values of up to 32 must be supported.</p> |
| <p>It is invalid for both <code>numberOfInputChannels</code> and |
| <code>numberOfOutputChannels</code> to be zero. </p> |
| </dd> |
| </dl> |
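<p>An informative sketch of a <code>ScriptProcessorNode</code> which simply
copies its input to its output; <code>source</code> is assumed to be a
previously created source node, and <code>onaudioprocess</code> is described in
the <a href="#ScriptProcessorNode">ScriptProcessorNode</a> section:</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span> </div>

<div class="blockContent">
<pre class="code"><code class="es-code">

var processor = context.createScriptProcessor(4096, 2, 2);

processor.onaudioprocess = function(event) {
    var numberOfChannels = event.outputBuffer.numberOfChannels;
    for (var channel = 0; channel &lt; numberOfChannels; ++channel) {
        var inputData = event.inputBuffer.getChannelData(channel);
        var outputData = event.outputBuffer.getChannelData(channel);
        for (var i = 0; i &lt; inputData.length; ++i)
            outputData[i] = inputData[i]; // pass-through; apply custom DSP here
    }
};

source.connect(processor);
processor.connect(context.destination);
</code></pre>
</div>
</div>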
| <dl> |
| <dt id="dfn-createAnalyser">The <code>createAnalyser</code> method</dt> |
<dd><p>Creates an <a
| href="#AnalyserNode-section"><code>AnalyserNode</code></a>.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-createGain">The <code>createGain</code> method</dt> |
| <dd><p>Creates a <a |
| href="#GainNode-section"><code>GainNode</code></a>.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-createDelay">The <code>createDelay</code> method</dt> |
| <dd><p>Creates a <a href="#DelayNode-section"><code>DelayNode</code></a> |
| representing a variable delay line. The initial default delay time will |
| be 0 seconds.</p> |
| <p>The <dfn id="dfn-maxDelayTime">maxDelayTime</dfn> parameter is |
| optional and specifies the maximum delay time in seconds allowed for the delay line. If specified, this value MUST be |
| greater than zero and less than three minutes or a NOT_SUPPORTED_ERR exception will be thrown.</p> |
| </dd> |
| </dl> |
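<p>An informative sketch; <code>delayTime</code> is the AudioParam described in
the <a href="#DelayNode">DelayNode</a> section:</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span> </div>

<div class="blockContent">
<pre class="code"><code class="es-code">

var delay = context.createDelay(5.0); // allow delays of up to 5 seconds
delay.delayTime.value = 0.25;         // currently delaying by 250ms
</code></pre>
</div>
</div>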
| <dl> |
| <dt id="dfn-createBiquadFilter">The <code>createBiquadFilter</code> |
| method</dt> |
| <dd><p>Creates a <a |
| href="#BiquadFilterNode-section"><code>BiquadFilterNode</code></a> |
| representing a second order filter which can be configured as one of |
| several common filter types.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-createWaveShaper">The <code>createWaveShaper</code> |
| method</dt> |
| <dd><p>Creates a <a |
| href="#WaveShaperNode-section"><code>WaveShaperNode</code></a> |
| representing a non-linear distortion.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-createPanner">The <code>createPanner</code> method</dt> |
<dd><p>Creates a <a
| href="#PannerNode-section"><code>PannerNode</code></a>.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-createConvolver">The <code>createConvolver</code> method</dt> |
| <dd><p>Creates a <a |
| href="#ConvolverNode-section"><code>ConvolverNode</code></a>.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-createChannelSplitter">The <code>createChannelSplitter</code> |
| method</dt> |
<dd><p>Creates a <a
| href="#ChannelSplitterNode-section"><code>ChannelSplitterNode</code></a> |
| representing a channel splitter. An exception will be thrown for invalid parameter values.</p> |
| <p>The <dfn id="dfn-numberOfOutputs">numberOfOutputs</dfn> parameter |
| determines the number of outputs. Values of up to 32 must be supported. If not specified, then 6 will be used. </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-createChannelMerger">The <code>createChannelMerger</code> |
| method</dt> |
<dd><p>Creates a <a
| href="#ChannelMergerNode-section"><code>ChannelMergerNode</code></a> |
| representing a channel merger. An exception will be thrown for invalid parameter values.</p> |
| <p>The <dfn id="dfn-numberOfInputs">numberOfInputs</dfn> parameter |
| determines the number of inputs. Values of up to 32 must be supported. If not specified, then 6 will be used. </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-createDynamicsCompressor">The |
| <code>createDynamicsCompressor</code> method</dt> |
| <dd><p>Creates a <a |
| href="#DynamicsCompressorNode-section"><code>DynamicsCompressorNode</code></a>.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-createOscillator">The |
| <code>createOscillator</code> method</dt> |
| <dd><p>Creates an <a |
| href="#OscillatorNode-section"><code>OscillatorNode</code></a>.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-createPeriodicWave">The |
| <code>createPeriodicWave</code> method</dt> |
| <dd><p>Creates a <a |
| href="#PeriodicWave-section"><code>PeriodicWave</code></a> representing a waveform containing arbitrary harmonic content. |
| The <code>real</code> and <code>imag</code> parameters must be of type <code>Float32Array</code> of equal |
| lengths greater than zero and less than or equal to 4096 or an exception will be thrown. |
| These parameters specify the Fourier coefficients of a |
| <a href="http://en.wikipedia.org/wiki/Fourier_series">Fourier series</a> representing the partials of a periodic waveform. |
| The created PeriodicWave will be used with an <a href="#OscillatorNode-section"><code>OscillatorNode</code></a> |
| and will represent a <em>normalized</em> time-domain waveform having maximum absolute peak value of 1. |
| Another way of saying this is that the generated waveform of an <a href="#OscillatorNode-section"><code>OscillatorNode</code></a> |
| will have maximum peak value at 0dBFS. Conveniently, this corresponds to the full-range of the signal values used by the Web Audio API. |
| Because the PeriodicWave will be normalized on creation, the <code>real</code> and <code>imag</code> parameters |
| represent <em>relative</em> values. |
| </p> |
| <p>The <dfn id="dfn-real">real</dfn> parameter represents an array of <code>cosine</code> terms (traditionally the A terms). |
| In audio terminology, the first element (index 0) is the DC-offset of the periodic waveform and is usually set to zero. |
| The second element (index 1) represents the fundamental frequency. The third element represents the first overtone, and so on.</p> |
| <p>The <dfn id="dfn-imag">imag</dfn> parameter represents an array of <code>sine</code> terms (traditionally the B terms). |
| The first element (index 0) should be set to zero (and will be ignored) since this term does not exist in the Fourier series. |
| The second element (index 1) represents the fundamental frequency. The third element represents the first overtone, and so on.</p> |
| </dd> |
| </dl> |
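<p>An informative sketch approximating a sawtooth waveform from the first
partials of its Fourier series (sine terms proportional to 1/n); it assumes the
<code>setPeriodicWave()</code> method of the <a
href="#OscillatorNode"><code>OscillatorNode</code></a> interface:</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span> </div>

<div class="blockContent">
<pre class="code"><code class="es-code">

var numberOfPartials = 64;
var real = new Float32Array(numberOfPartials); // all cosine terms left at zero
var imag = new Float32Array(numberOfPartials);

// Relative amplitudes only; the PeriodicWave is normalized on creation.
for (var n = 1; n &lt; numberOfPartials; ++n)
    imag[n] = 1 / n;

var wave = context.createPeriodicWave(real, imag);

var osc = context.createOscillator();
osc.setPeriodicWave(wave);
osc.frequency.value = 440;
osc.connect(context.destination);
osc.start(0);
</code></pre>
</div>
</div>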
</div>
| |
| <h3 id="lifetime-AudioContext">4.1.3. Lifetime</h3> |
| <p class="norm">This section is informative.</p> |
| |
| <p> |
| Once created, an <code>AudioContext</code> will continue to play sound until it has no more sound to play, or |
| the page goes away. |
</p>
</div>
| |
| <div id="OfflineAudioContext-section-section" class="section"> |
| <h2 id="OfflineAudioContext-section">4.1b. The OfflineAudioContext Interface</h2> |
| <p> |
| OfflineAudioContext is a particular type of AudioContext for rendering/mixing-down (potentially) faster than real-time. |
| It does not render to the audio hardware, but instead renders as quickly as possible, calling a completion event handler |
| with the result provided as an AudioBuffer. |
| </p> |
| |
| |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="offline-audio-context-idl"> |
| [Constructor(unsigned long numberOfChannels, unsigned long length, float sampleRate)] |
| interface <dfn id="dfn-OfflineAudioContext">OfflineAudioContext</dfn> : AudioContext { |
| |
| void startRendering(); |
| |
| attribute EventHandler oncomplete; |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| |
| |
| <div id="attributes-OfflineAudioContext-section" class="section"> |
| <h3 id="attributes-OfflineAudioContext">4.1b.1. Attributes</h3> |
| <dl> |
| <dt id="dfn-oncomplete"><code>oncomplete</code></dt> |
| <dd><p>An EventHandler of type <a href="#OfflineAudioCompletionEvent-section">OfflineAudioCompletionEvent</a>.</p> |
| </dd> |
| </dl> |
| </div> |
| |
| |
| <div id="methodsandparams-OfflineAudioContext-section" class="section"> |
| <h3 id="methodsandparams-OfflineAudioContext">4.1b.2. Methods and Parameters</h3> |
| <dl> |
| <dt id="dfn-startRendering">The <code>startRendering</code> |
| method</dt> |
| <dd><p>Given the current connections and scheduled changes, starts rendering audio. The |
| <code>oncomplete</code> handler will be called once the rendering has finished. |
This method must only be called once or an exception will be thrown.</p>
| </dd> |
| </dl> |
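<p>An informative sketch rendering one second of a 440Hz tone offline at
44.1 kHz; the <code>renderedBuffer</code> attribute is defined on the <a
href="#OfflineAudioCompletionEvent">OfflineAudioCompletionEvent</a> interface
below:</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span> </div>

<div class="blockContent">
<pre class="code"><code class="es-code">

var offline = new OfflineAudioContext(2, 44100, 44100);

var osc = offline.createOscillator();
osc.frequency.value = 440;
osc.connect(offline.destination);
osc.start(0);

offline.oncomplete = function(event) {
    var renderedBuffer = event.renderedBuffer; // an AudioBuffer with the result
};
offline.startRendering();
</code></pre>
</div>
</div>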
| </div> |
| |
| |
| <div id="OfflineAudioCompletionEvent-section" class="section"> |
| <h2 id="OfflineAudioCompletionEvent">4.1c. The OfflineAudioCompletionEvent Interface</h2> |
| |
<p>This is an <code>Event</code> object which is dispatched to an <a
href="#OfflineAudioContext-section"><code>OfflineAudioContext</code></a>. </p>
| |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="offline-audio-completion-event-idl"> |
| |
| interface <dfn id="dfn-OfflineAudioCompletionEvent">OfflineAudioCompletionEvent</dfn> : Event { |
| |
| readonly attribute AudioBuffer renderedBuffer; |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| |
| <div id="attributes-OfflineAudioCompletionEvent-section" class="section"> |
| <h3 id="attributes-OfflineAudioCompletionEvent">4.1c.1. Attributes</h3> |
| <dl> |
| <dt id="dfn-renderedBuffer"><code>renderedBuffer</code></dt> |
| <dd><p>An AudioBuffer containing the rendered audio data once an OfflineAudioContext has finished rendering. |
| It will have a number of channels equal to the <code>numberOfChannels</code> parameter |
| of the OfflineAudioContext constructor.</p> |
| </dd> |
| </dl> |
| </div> |
| </div> |
| |
| |
| <div id="AudioNode-section-section" class="section"> |
| <h2 id="AudioNode-section">4.2. The AudioNode Interface</h2> |
| |
| <p>AudioNodes are the building blocks of an <a |
| href="#AudioContext-section"><code>AudioContext</code></a>. This interface |
| represents audio sources, the audio destination, and intermediate processing |
| modules. These modules can be connected together to form <a |
| href="#ModularRouting-section">processing graphs</a> for rendering audio to the |
| audio hardware. Each node can have <dfn>inputs</dfn> and/or <dfn>outputs</dfn>. |
| A <dfn>source node</dfn> has no inputs |
| and a single output. An <a |
| href="#AudioDestinationNode-section"><code>AudioDestinationNode</code></a> has |
| one input and no outputs and represents the final destination to the audio |
| hardware. Most processing nodes such as filters will have one input and one |
output. Each type of <code>AudioNode</code> differs in the details of how it processes or synthesizes audio. But, in general, an <code>AudioNode</code>
will process its inputs (if it has any) and generate audio for its outputs (if it has any).
| </p> |
| |
| <p> |
| Each <dfn>output</dfn> has one or more <dfn>channels</dfn>. The exact number of channels depends on the details of the specific AudioNode. |
| </p> |
| |
| <p> |
| An output may connect to one or more <code>AudioNode</code> inputs, thus <em>fan-out</em> is supported. An input initially has no connections, |
| but may be connected from one |
| or more <code>AudioNode</code> outputs, thus <em>fan-in</em> is supported. When the <code>connect()</code> method is called to connect |
| an output of an AudioNode to an input of an AudioNode, we call that a <dfn>connection</dfn> to the input. |
| </p> |
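
<p>An informative sketch of fan-out and fan-in: the single output of
<code>source</code> feeds two nodes, and both of their outputs are summed at
the destination's single input:</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span> </div>

<div class="blockContent">
<pre class="code"><code class="es-code">

var source = context.createBufferSource();
var filter = context.createBiquadFilter();
var gain = context.createGain();

// Fan-out: one output with two connections.
source.connect(filter);
source.connect(gain);

// Fan-in: two connections summed at the destination's single input.
filter.connect(context.destination);
gain.connect(context.destination);
</code></pre>
</div>
</div>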
| |
| <p> |
| Each AudioNode <dfn>input</dfn> has a specific number of channels at any given time. This number can change depending on the <dfn>connection(s)</dfn> |
| made to the input. If the input has no connections then it has one channel which is silent. |
| </p> |
| |
| <p> |
| For each <dfn>input</dfn>, an <code>AudioNode</code> performs a mixing (usually an up-mixing) of all connections to that input. |
| |
| Please see <a href="#MixerGainStructure-section">Mixer Gain Structure</a> for more informative details, and the <a href="#UpMix-section">Channel up-mixing and down-mixing</a> |
| section for normative requirements. |
| |
| </p> |
| |
| <p> |
| For performance reasons, practical implementations will need to use block processing, with each <code>AudioNode</code> processing a |
| fixed number of sample-frames of size <em>block-size</em>. In order to get uniform behavior across implementations, we will define this |
value explicitly. <em>block-size</em> is defined to be 128 sample-frames, which corresponds to roughly 3 ms at a sample-rate of 44.1 kHz.
| </p> |
| |
| <p> |
| AudioNodes are <em>EventTarget</em>s, as described in <cite><a href="http://dom.spec.whatwg.org/">DOM</a></cite> |
| <a href="#DOM">[DOM]</a>. This means that it is possible to dispatch events to AudioNodes the same |
| way that other EventTargets accept events. |
| </p> |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="audio-node-idl"> |
| |
| enum <dfn>ChannelCountMode</dfn> { |
| "max", |
| "clamped-max", |
| "explicit" |
| }; |
| |
| enum <dfn>ChannelInterpretation</dfn> { |
| "speakers", |
| "discrete" |
| }; |
| |
| interface <dfn id="dfn-AudioNode">AudioNode</dfn> : EventTarget { |
| |
| void connect(AudioNode destination, optional unsigned long output = 0, optional unsigned long input = 0); |
| void connect(AudioParam destination, optional unsigned long output = 0); |
| void disconnect(optional unsigned long output = 0); |
| |
| readonly attribute AudioContext context; |
| readonly attribute unsigned long numberOfInputs; |
| readonly attribute unsigned long numberOfOutputs; |
| |
| // Channel up-mixing and down-mixing rules for all inputs. |
| attribute unsigned long channelCount; |
| attribute ChannelCountMode channelCountMode; |
| attribute ChannelInterpretation channelInterpretation; |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| |
| <div id="attributes-AudioNode-section" class="section"> |
| <h3 id="attributes-AudioNode">4.2.1. Attributes</h3> |
| <dl> |
| <dt id="dfn-context"><code>context</code></dt> |
| <dd><p>The AudioContext which owns this AudioNode.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-numberOfInputs_2"><code>numberOfInputs</code></dt> |
| <dd><p>The number of inputs feeding into the AudioNode. For <dfn>source nodes</dfn>, |
| this will be 0.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-numberOfOutputs_2"><code>numberOfOutputs</code></dt> |
| <dd><p>The number of outputs coming out of the AudioNode. This will be 0 |
| for an AudioDestinationNode.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-channelCount"><code>channelCount</code><dt> |
| <dd><p>The number of channels used when up-mixing and down-mixing connections to any inputs to the node. The default value is 2 |
| except for specific nodes where its value is specially determined. |
| This attribute has no effect for nodes with no inputs. |
| If this value is set to zero, the implementation MUST raise the |
| NOT_SUPPORTED_ERR exception.</p> |
| <p>See the <a href="#UpMix-section">Channel up-mixing and down-mixing</a> |
| section for more information on this attribute.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-channelCountMode"><code>channelCountMode</code><dt> |
| <dd><p>Determines how channels will be counted when up-mixing and down-mixing connections to any inputs to the node |
| . This attribute has no effect for nodes with no inputs.</p> |
| <p>See the <a href="#UpMix-section">Channel up-mixing and down-mixing</a> |
| section for more information on this attribute.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-channelInterpretation"><code>channelInterpretation</code><dt> |
| <dd><p>Determines how individual channels will be treated when up-mixing and down-mixing connections to any inputs to the node. |
| This attribute has no effect for nodes with no inputs.</p> |
| <p>See the <a href="#UpMix-section">Channel up-mixing and down-mixing</a> |
| section for more information on this attribute.</p> |
| </dd> |
| </dl> |
| </div> |
| |
| <div id="methodsandparams-AudioNode-section" class="section"> |
| <h3 id="methodsandparams-AudioNode">4.2.2. Methods and Parameters</h3> |
| <dl> |
| <dt id="dfn-connect-AudioNode">The <code>connect</code> to AudioNode method</dt> |
| <dd><p>Connects the AudioNode to another AudioNode.</p> |
| <p>The <dfn id="dfn-destination_2">destination</dfn> parameter is the |
| AudioNode to connect to.</p> |
| <p>The <dfn id="dfn-output_2">output</dfn> parameter is an index |
| describing which output of the AudioNode from which to connect. An |
| out-of-bound value throws an exception.</p> |
| <p>The <dfn id="dfn-input_2">input</dfn> parameter is an index describing |
| which input of the destination AudioNode to connect to. An out-of-bound |
| value throws an exception. </p> |
| <p>It is possible to connect an AudioNode output to more than one input |
| with multiple calls to connect(). Thus, "fan-out" is supported. </p> |
| <p> |
| It is possible to connect an AudioNode to another AudioNode which creates a <em>cycle</em>. |
| In other words, an AudioNode may connect to another AudioNode, which in turn connects back |
| to the first AudioNode. This is allowed only if there is at least one |
| <a class="dfnref" href="#DelayNode-section">DelayNode</a> in the <em>cycle</em> or an exception will |
| be thrown. |
| </p> |
| |
| <p> |
| There can only be one connection between a given output of one specific node and a given input of another specific node. |
| Multiple connections with the same termini are ignored. For example: |
| </p> |
| |
<pre>
nodeA.connect(nodeB);
nodeA.connect(nodeB);
</pre>

<p>will have the same effect as</p>

<pre>
nodeA.connect(nodeB);
</pre>
| |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-connect-AudioParam">The <code>connect</code> to AudioParam method</dt> |
| <dd><p>Connects the AudioNode to an AudioParam, controlling the parameter |
| value with an audio-rate signal. |
| </p> |
| |
| <p>The <dfn id="dfn-destination_3">destination</dfn> parameter is the |
| AudioParam to connect to.</p> |
| <p>The <dfn id="dfn-output_3-destination">output</dfn> parameter is an index |
| describing which output of the AudioNode from which to connect. An |
| out-of-bound value throws an exception.</p> |
| |
| <p>It is possible to connect an AudioNode output to more than one AudioParam |
| with multiple calls to connect(). Thus, "fan-out" is supported. </p> |
| <p>It is possible to connect more than one AudioNode output to a single AudioParam |
| with multiple calls to connect(). Thus, "fan-in" is supported. </p> |
<p>An AudioParam will take the rendered audio data from any AudioNode output connected to it and <a href="#down-mix">convert it to mono</a> by down-mixing if it is not
already mono. It will then mix this signal together with any other such outputs, and finally with the <em>intrinsic</em>
parameter value (the value the AudioParam would normally have without any audio connections), including any timeline changes
scheduled for the parameter. </p>
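<p>For example, the following informative sketch (assuming an AudioContext <code>context</code> and a GainNode
<code>amp</code>) uses an oscillator as a low-frequency modulator of a gain parameter, summing with the
parameter's intrinsic value:</p>

<pre>
var lfo = context.createOscillator();   // modulation source
var depth = context.createGain();       // modulation depth

lfo.frequency.value = 5;                // 5Hz modulation
depth.gain.value = 0.4;

lfo.connect(depth);
depth.connect(amp.gain);                // sums with amp.gain's intrinsic value
lfo.start(0);                           // begin the oscillator
</pre>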
| |
| <p> |
| There can only be one connection between a given output of one specific node and a specific AudioParam. |
| Multiple connections with the same termini are ignored. For example: |
| </p> |
| |
<pre>
nodeA.connect(param);
nodeA.connect(param);
</pre>

<p>will have the same effect as</p>

<pre>
nodeA.connect(param);
</pre>
| |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-disconnect">The <code>disconnect</code> method</dt> |
| <dd><p>Disconnects an AudioNode's output.</p> |
| <p>The <dfn id="dfn-output_3-disconnect">output</dfn> parameter is an index |
| describing which output of the AudioNode to disconnect. An out-of-bound |
| value throws an exception.</p> |
| </dd> |
| </dl> |
| </div> |
| |
| <h3 id="lifetime-AudioNode">4.2.3. Lifetime</h3> |
| |
| <p class="norm">This section is informative.</p> |
| |
<p>An implementation may choose any method to avoid unnecessary resource usage and unbounded memory growth from unused/finished
nodes. The following description is meant to guide the general expectation of how node lifetime would be managed.
| </p> |
| |
| <p> |
| An <code>AudioNode</code> will live as long as there are any references to it. There are several types of references: |
| </p> |
| |
| <ol> |
| <li>A <em>normal</em> JavaScript reference obeying normal garbage collection rules. </li> |
| <li>A <em>playing</em> reference for both <code>AudioBufferSourceNodes</code> and <code>OscillatorNodes</code>. |
| These nodes maintain a <em>playing</em> |
| reference to themselves while they are currently playing.</li> |
| <li>A <em>connection</em> reference which occurs if another <code>AudioNode</code> is connected to it. </li> |
| <li>A <em>tail-time</em> reference which an <code>AudioNode</code> maintains on itself as long as it has |
| any internal processing state which has not yet been emitted. For example, a <code>ConvolverNode</code> has |
| a tail which continues to play even after receiving silent input (think about clapping your hands in a large concert |
| hall and continuing to hear the sound reverberate throughout the hall). Some <code>AudioNodes</code> have this |
| property. Please see details for specific nodes.</li> |
| </ol> |
| |
| <p> |
| Any <code>AudioNodes</code> which are connected in a cycle <em>and</em> are directly or indirectly connected to the |
| <code>AudioDestinationNode</code> of the <code>AudioContext</code> will stay alive as long as the <code>AudioContext</code> is alive. |
| </p> |
| |
| <p> |
| When an <code>AudioNode</code> has no references it will be deleted. But before it is deleted, it will disconnect itself |
| from any other <code>AudioNodes</code> which it is connected to. In this way it releases all connection references (3) it has to other nodes. |
| </p> |
| |
| <p> |
| Regardless of any of the above references, it can be assumed that the <code>AudioNode</code> will be deleted when its <code>AudioContext</code> is deleted. |
</p>
</div>
| |
| |
| <div id="AudioDestinationNode-section" class="section"> |
| <h2 id="AudioDestinationNode">4.4. The AudioDestinationNode Interface</h2> |
| |
| <p>This is an <a href="#AudioNode-section"><code>AudioNode</code></a> |
| representing the final audio destination and is what the user will ultimately |
| hear. It can often be considered as an audio output device which is connected to |
| speakers. All rendered audio to be heard will be routed to this node, a |
| "terminal" node in the AudioContext's routing graph. There is only a single |
| AudioDestinationNode per AudioContext, provided through the |
| <code>destination</code> attribute of <a |
| href="#AudioContext-section"><code>AudioContext</code></a>. </p> |
| <pre> |
| numberOfInputs : 1 |
| numberOfOutputs : 0 |
| |
| channelCount = 2; |
| channelCountMode = "explicit"; |
| channelInterpretation = "speakers"; |
| </pre> |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="audio-destination-node-idl"> |
| |
| interface <dfn id="dfn-AudioDestinationNode">AudioDestinationNode</dfn> : AudioNode { |
| |
| readonly attribute unsigned long maxChannelCount; |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| |
| <div id="attributes-AudioDestinationNode-section" class="section"> |
| <h3 id="attributes-AudioDestinationNode">4.4.1. Attributes</h3> |
| <dl> |
| <dt id="dfn-maxChannelCount"><code>maxChannelCount</code></dt> |
| <dd><p>The maximum number of channels that the <code>channelCount</code> attribute can be set to. |
| An <code>AudioDestinationNode</code> representing the audio hardware end-point (the normal case) can potentially output more than |
| 2 channels of audio if the audio hardware is multi-channel. <code>maxChannelCount</code> is the maximum number of channels that |
| this hardware is capable of supporting. If this value is 0, then this indicates that <code>channelCount</code> may not be |
| changed. This will be the case for an <code>AudioDestinationNode</code> in an <code>OfflineAudioContext</code> and also for |
| basic implementations with hardware support for stereo output only.</p> |
| |
| <p><code>channelCount</code> defaults to 2 for a destination in a normal AudioContext, and may be set to any non-zero value less than or equal |
to <code>maxChannelCount</code>. An exception will be thrown if this value is not within the valid range. Giving a concrete example, if
the audio hardware supports 8-channel output, then we may set <code>channelCount</code> to 8, and render 8 channels of output.
| </p> |
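<p>For example, an informative sketch (assuming an AudioContext <code>context</code>) configuring the
destination for multi-channel output when the hardware supports it:</p>

<pre>
var destination = context.destination;
if (destination.maxChannelCount >= 6) {
    // The hardware supports at least 6 discrete channels (e.g. 5.1).
    destination.channelCount = 6;
}
</pre>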
| |
| <p> |
| For an AudioDestinationNode in an OfflineAudioContext, the <code>channelCount</code> is determined when the offline context is created and this value |
| may not be changed. |
| </p> |
| |
| </dd> |
| </dl> |
| |
| </div> |
| </div> |
| |
| <div id="AudioParam-section" class="section"> |
| <h2 id="AudioParam">4.5. The AudioParam Interface</h2> |
| |
| <p>AudioParam controls an individual aspect of an <a |
| href="#AudioNode-section"><code>AudioNode</code></a>'s functioning, such as |
| volume. The parameter can be set immediately to a particular value using the |
| "value" attribute. Or, value changes can be scheduled to happen at |
| very precise times (in the coordinate system of AudioContext.currentTime), for envelopes, volume fades, LFOs, filter sweeps, grain |
| windows, etc. In this way, arbitrary timeline-based automation curves can be |
| set on any AudioParam. Additionally, audio signals from the outputs of <code>AudioNodes</code> can be connected |
| to an <code>AudioParam</code>, summing with the <em>intrinsic</em> parameter value. |
| </p> |
| |
| <p> |
| Some synthesis and processing <code>AudioNodes</code> have <code>AudioParams</code> as attributes whose values must |
| be taken into account on a per-audio-sample basis. |
| For other <code>AudioParams</code>, sample-accuracy is not important and the value changes can be sampled more coarsely. |
| Each individual <code>AudioParam</code> will specify that it is either an <em>a-rate</em> parameter |
| which means that its values must be taken into account on a per-audio-sample basis, or it is a <em>k-rate</em> parameter. |
| </p> |
| |
| <p> |
| Implementations must use block processing, with each <code>AudioNode</code> |
| processing 128 sample-frames in each block. |
| </p> |
| |
| <p> |
| For each 128 sample-frame block, the value of a <em>k-rate</em> parameter must |
| be sampled at the time of the very first sample-frame, and that value must be |
| used for the entire block. <em>a-rate</em> parameters must be sampled for each |
| sample-frame of the block. |
| </p> |
| |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="audio-param-idl"> |
| |
| interface <dfn id="dfn-AudioParam">AudioParam</dfn> { |
| |
| attribute float value; |
| readonly attribute float defaultValue; |
| |
| <span class="comment">// Parameter automation. </span> |
| void setValueAtTime(float value, double startTime); |
| void linearRampToValueAtTime(float value, double endTime); |
| void exponentialRampToValueAtTime(float value, double endTime); |
| |
| <span class="comment">// Exponentially approach the target value with a rate having the given time constant. </span> |
| void setTargetAtTime(float target, double startTime, double timeConstant); |
| |
| <span class="comment">// Sets an array of arbitrary parameter values starting at time for the given duration. </span> |
| <span class="comment">// The number of values will be scaled to fit into the desired duration. </span> |
| void setValueCurveAtTime(Float32Array values, double startTime, double duration); |
| |
| <span class="comment">// Cancels all scheduled parameter changes with times greater than or equal to startTime. </span> |
| void cancelScheduledValues(double startTime); |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| |
| |
| |
| <div id="attributes-AudioParam-section" class="section"> |
| <h3 id="attributes-AudioParam">4.5.1. Attributes</h3> |
| |
| <dl> |
| <dt id="dfn-value"><code>value</code></dt> |
| <dd><p>The parameter's floating-point value. This attribute is initialized to the |
| <code>defaultValue</code>. If a value is set during a time when there are any automation events scheduled then |
| it will be ignored and no exception will be thrown.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-defaultValue"><code>defaultValue</code></dt> |
<dd><p>Initial value for the <code>value</code> attribute.</p>
| </dd> |
| </dl> |
| </div> |
| |
| <div id="methodsandparams-AudioParam-section" class="section"> |
| <h3 id="methodsandparams-AudioParam">4.5.2. Methods and Parameters</h3> |
| |
| <p> |
An <code>AudioParam</code> maintains a time-ordered event list, which is initially empty. The times are in
the time coordinate system of AudioContext.currentTime. The events define a mapping from time to value. The following methods
can change the event list by adding a new event, of a type specific to the method, into the list. Each event
has a time associated with it, and the events will always be kept in time-order in the list. These
methods will be called <em>automation</em> methods:</p>
| |
| <ul> |
| <li>setValueAtTime() - <em>SetValue</em></li> |
| <li>linearRampToValueAtTime() - <em>LinearRampToValue</em></li> |
| <li>exponentialRampToValueAtTime() - <em>ExponentialRampToValue</em></li> |
| <li>setTargetAtTime() - <em>SetTarget</em></li> |
| <li>setValueCurveAtTime() - <em>SetValueCurve</em></li> |
| </ul> |
| |
| <p> |
| The following rules will apply when calling these methods: |
| </p> |
| <ul> |
| <li>If one of these events is added at a time where there is already an event of the exact same type, then the new event will replace the old |
| one.</li> |
<li>If one of these events is added at a time where there are already one or more events of a different type, then it will be
placed in the list after them, but before events whose times are after it. </li>
| <li>If setValueCurveAtTime() is called for time T and duration D and there are any events having a time greater than T, but less than |
| T + D, then an exception will be thrown. In other words, it's not ok to schedule a value curve during a time period containing other events.</li> |
| <li>Similarly an exception will be thrown if any <em>automation</em> method is called at a time which is inside of the time interval |
| of a <em>SetValueCurve</em> event at time T and duration D.</li> |
| </ul> |
| |
| <dl> |
| <dt id="dfn-setValueAtTime">The <code>setValueAtTime</code> method</dt> |
| <dd><p>Schedules a parameter value change at the given time.</p> |
| <p>The <dfn id="dfn-value_2">value</dfn> parameter is the value the |
| parameter will change to at the given time.</p> |
| <p>The <dfn id="dfn-startTime_2">startTime</dfn> parameter is the time in the same time coordinate system as AudioContext.currentTime.</p> |
| <p> |
| If there are no more events after this <em>SetValue</em> event, then for t >= startTime, v(t) = value. In other words, the value will remain constant. |
| </p> |
| <p> |
| If the next event (having time T1) after this <em>SetValue</em> event is not of type <em>LinearRampToValue</em> or <em>ExponentialRampToValue</em>, |
| then, for t: startTime <= t < T1, v(t) = value. |
| In other words, the value will remain constant during this time interval, allowing the creation of "step" functions. |
| </p> |
| <p> |
| If the next event after this <em>SetValue</em> event is of type <em>LinearRampToValue</em> or <em>ExponentialRampToValue</em> then please |
| see details below. |
| </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-linearRampToValueAtTime">The <code>linearRampToValueAtTime</code> |
| method</dt> |
| <dd><p>Schedules a linear continuous change in parameter value from the |
| previous scheduled parameter value to the given value.</p> |
| <p>The <dfn id="dfn-value_3">value</dfn> parameter is the value the |
| parameter will linearly ramp to at the given time.</p> |
| <p>The <dfn id="dfn-endTime_3">endTime</dfn> parameter is the time in the same time coordinate system as AudioContext.currentTime.</p> |
| |
| <p> |
| The value during the time interval T0 <= t < T1 (where T0 is the time of the previous event and T1 is the endTime parameter passed into this method) |
| will be calculated as: |
| </p> |
| <pre> |
| v(t) = V0 + (V1 - V0) * ((t - T0) / (T1 - T0)) |
| </pre> |
| <p> |
| Where V0 is the value at the time T0 and V1 is the value parameter passed into this method. |
| </p> |
| <p> |
If there are no more events after this <em>LinearRampToValue</em> event, then for t >= T1, v(t) = V1.
| </p> |
| |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-exponentialRampToValueAtTime">The |
| <code>exponentialRampToValueAtTime</code> method</dt> |
| <dd><p>Schedules an exponential continuous change in parameter value from |
| the previous scheduled parameter value to the given value. Parameters |
| representing filter frequencies and playback rate are best changed |
| exponentially because of the way humans perceive sound. </p> |
| <p>The <dfn id="dfn-value_4">value</dfn> parameter is the value the |
| parameter will exponentially ramp to at the given time. An exception will be thrown if this value is less than |
| or equal to 0, or if the value at the time of the previous event is less than or equal to 0.</p> |
| <p>The <dfn id="dfn-endTime_4">endTime</dfn> parameter is the time in the same time coordinate system as AudioContext.currentTime.</p> |
| <p> |
| The value during the time interval T0 <= t < T1 (where T0 is the time of the previous event and T1 is the endTime parameter passed into this method) |
| will be calculated as: |
| </p> |
| <pre> |
| v(t) = V0 * (V1 / V0) ^ ((t - T0) / (T1 - T0)) |
| </pre> |
| <p> |
| Where V0 is the value at the time T0 and V1 is the value parameter passed into this method. |
| </p> |
| <p> |
If there are no more events after this <em>ExponentialRampToValue</em> event, then for t >= T1, v(t) = V1.
| </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-setTargetAtTime">The <code>setTargetAtTime</code> |
| method</dt> |
<dd><p>Starts exponentially approaching the target value at the given time
| with a rate having the given time constant. Among other uses, this is |
| useful for implementing the "decay" and "release" portions of an ADSR |
| envelope. Please note that the parameter value does not immediately |
| change to the target value at the given time, but instead gradually |
| changes to the target value.</p> |
| <p>The <dfn id="dfn-target">target</dfn> parameter is the value |
| the parameter will <em>start</em> changing to at the given time.</p> |
| <p>The <dfn id="dfn-startTime">startTime</dfn> parameter is the time in the same time coordinate system as AudioContext.currentTime.</p> |
| <p>The <dfn id="dfn-timeConstant">timeConstant</dfn> parameter is the |
| time-constant value of first-order filter (exponential) approach to the |
| target value. The larger this value is, the slower the transition will |
| be.</p> |
| <p> |
More precisely, <em>timeConstant</em> is the time it takes a first-order linear continuous time-invariant system
to reach 1 - 1/e (around 63.2%) of the final value in response to a step input (a transition from 0 to 1).
| </p> |
| <p> |
| During the time interval: <em>T0</em> <= t < <em>T1</em>, where T0 is the <em>startTime</em> parameter and T1 represents the time of the event following this |
| event (or <em>infinity</em> if there are no following events): |
| </p> |
| <pre> |
| v(t) = V1 + (V0 - V1) * exp(-(t - T0) / <em>timeConstant</em>) |
| </pre> |
| <p> |
| Where V0 is the initial value (the .value attribute) at T0 (the <em>startTime</em> parameter) and V1 is equal to the <em>target</em> |
| parameter. |
| </p> |
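<p>For example, an informative sketch (assuming a GainNode <code>amp</code> and a time <code>releaseTime</code>
in the coordinate system of AudioContext.currentTime) implementing the release portion of an envelope:</p>

<pre>
// Begin an exponential approach toward silence at releaseTime.
// After roughly 3 * timeConstant the value is within 5% of the target.
amp.gain.setTargetAtTime(0, releaseTime, 0.1);
</pre>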
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-setValueCurveAtTime">The <code>setValueCurveAtTime</code> |
| method</dt> |
| <dd><p>Sets an array of arbitrary parameter values starting at the given |
| time for the given duration. The number of values will be scaled to fit |
| into the desired duration. </p> |
| <p>The <dfn id="dfn-values">values</dfn> parameter is a Float32Array |
| representing a parameter value curve. These values will apply starting at |
| the given time and lasting for the given duration. </p> |
| <p>The <dfn id="dfn-startTime_5">startTime</dfn> parameter is the time in the same time coordinate system as AudioContext.currentTime.</p> |
| <p>The <dfn id="dfn-duration_5">duration</dfn> parameter is the |
| amount of time in seconds (after the <em>time</em> parameter) where values will be calculated according to the <em>values</em> parameter..</p> |
| <p> |
| During the time interval: <em>startTime</em> <= t < <em>startTime</em> + <em>duration</em>, values will be calculated: |
| </p> |
<pre>
v(t) = values[N * (t - startTime) / duration]
</pre>
<p>
Where <em>N</em> is the length of the <em>values</em> array.
</p>
| <p> |
| After the end of the curve time interval (t >= <em>startTime</em> + <em>duration</em>), the value will remain constant at the final curve value, |
| until there is another automation event (if any). |
| </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-cancelScheduledValues">The <code>cancelScheduledValues</code> |
| method</dt> |
| <dd><p>Cancels all scheduled parameter changes with times greater than or |
| equal to startTime.</p> |
| <p>The <dfn>startTime</dfn> parameter is the starting |
| time at and after which any previously scheduled parameter changes will |
| be cancelled. It is a time in the same time coordinate system as AudioContext.currentTime.</p> |
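<p>For example, an informative sketch (assuming an AudioParam <code>param</code> and an AudioContext
<code>context</code>) abandoning a previously scheduled fade:</p>

<pre>
param.setValueAtTime(1, context.currentTime);
param.linearRampToValueAtTime(0, context.currentTime + 10);

// Some time later: cancel the pending ramp (and any other events from now on).
param.cancelScheduledValues(context.currentTime);
</pre>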
| </dd> |
| </dl> |
| </div> |
| </div> |
| |
| |
| |
| <div id="computedValue-AudioParam-section" class="section"> |
| <h3>4.5.3. Computation of Value</h3> |
| |
| <p> |
| <dfn>computedValue</dfn> is the final value controlling the audio DSP and is computed by the audio rendering thread during each rendering time quantum. |
| It must be internally computed as follows: |
| </p> |
| |
| <ol> |
| <li>An <em>intrinsic</em> parameter value will be calculated at each time, which is either the value set directly to the .value attribute, |
| or, if there are any scheduled parameter changes (automation events) with times before or at this time, |
| the value as calculated from these events. If the .value attribute |
| is set after any automation events have been scheduled, then these events will be removed. When read, the .value attribute |
| always returns the <em>intrinsic</em> value for the current time. If automation events are removed from a given time range, then the |
| <em>intrinsic</em> value will remain unchanged and stay at its previous value until either the .value attribute is directly set, or automation events are added |
| for the time range. |
| </li> |
| |
| <li> |
| An AudioParam will take the rendered audio data from any AudioNode output connected to it and <a href="#down-mix">convert it to mono</a> by down-mixing if it is not |
| already mono, then mix it together with other such outputs. If there are no AudioNodes connected to it, then this value is 0, having no |
| effect on the <em>computedValue</em>. |
| </li> |
| |
| <li> |
| The <em>computedValue</em> is the sum of the <em>intrinsic</em> value and the value calculated from (2). |
| </li> |
| |
| </ol> |
| |
| </div> |
| |
| |
| <div id="example1-AudioParam-section" class="section"> |
| <h3 id="example1-AudioParam">4.5.4. AudioParam Automation Example</h3> |
| |
| |
| |
| <div class="example"> |
| |
| <div class="exampleHeader"> |
| Example</div> |
| <img alt="AudioParam automation" src="images/audioparam-automation1.png" /> |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">ECMAScript</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="es-code"> |
| var t0 = 0; |
| var t1 = 0.1; |
| var t2 = 0.2; |
| var t3 = 0.3; |
| var t4 = 0.4; |
| var t5 = 0.6; |
| var t6 = 0.7; |
| var t7 = 1.0; |
| |
| var curveLength = 44100; |
| var curve = new Float32Array(curveLength); |
| for (var i = 0; i < curveLength; ++i) |
| curve[i] = Math.sin(Math.PI * i / curveLength); |
| |
| param.setValueAtTime(0.2, t0); |
| param.setValueAtTime(0.3, t1); |
| param.setValueAtTime(0.4, t2); |
| param.linearRampToValueAtTime(1, t3); |
| param.linearRampToValueAtTime(0.15, t4); |
| param.exponentialRampToValueAtTime(0.75, t5); |
| param.exponentialRampToValueAtTime(0.05, t6); |
| param.setValueCurveAtTime(curve, t6, t7 - t6); |
| </code></pre> |
| </div> |
| </div> |
| </div> |
| </div> |
| |
| <div id="GainNode-section" class="section"> |
| <h2 id="GainNode">4.7. The GainNode Interface</h2> |
| |
| <p>Changing the gain of an audio signal is a fundamental operation in audio |
| applications. The <code>GainNode</code> is one of the building blocks for creating <a |
| href="#MixerGainStructure-section">mixers</a>. |
| This interface is an AudioNode with a single input and single |
| output: </p> |
| <pre> |
| numberOfInputs : 1 |
| numberOfOutputs : 1 |
| |
| channelCountMode = "max"; |
| channelInterpretation = "speakers"; |
| </pre> |
| |
| <p>It multiplies the input audio signal by the (possibly time-varying) <code>gain</code> attribute, copying the result to the output. |
| By default, it will take the input and pass it through to the output unchanged, which represents a constant gain change |
| of 1. |
| </p> |
| |
| <p> |
| As with other <code>AudioParams</code>, the <code>gain</code> parameter represents a mapping from time |
| (in the coordinate system of AudioContext.currentTime) to floating-point value. |
| |
| Every PCM audio sample in the input is multiplied by the <code>gain</code> parameter's value for the specific time |
| corresponding to that audio sample. This multiplied value represents the PCM audio sample for the output. |
| </p> |
| |
| <p> |
| The number of channels of the output will always equal the number of channels of the input, with each channel |
| of the input being multiplied by the <code>gain</code> values and being copied into the corresponding channel |
| of the output. |
| </p> |
| |
| <p> |
| The implementation must make |
| gain changes to the audio stream smoothly, without introducing noticeable |
| clicks or glitches. This process is called "de-zippering". </p> |
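<p>For example, the following informative sketch (assuming an AudioContext <code>context</code> and a source
node <code>source</code>) attenuates a signal by half:</p>

<div class="example">

<div class="exampleHeader">
Example</div>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span></div>

<div class="blockContent">
<pre class="code"><code class="es-code">
var amp = context.createGain();

source.connect(amp);
amp.connect(context.destination);

amp.gain.value = 0.5;   // multiply each input sample by 0.5
</code></pre>
</div>
</div>
</div>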
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="gain-node-idl"> |
| |
| interface <dfn id="dfn-GainNode">GainNode</dfn> : AudioNode { |
| |
| readonly attribute AudioParam gain; |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| |
| <div id="attributes-GainNode-section" class="section"> |
| <h3 id="attributes-GainNode">4.7.1. Attributes</h3> |
| <dl> |
| <dt id="dfn-gain"><code>gain</code></dt> |
<dd><p>Represents the amount of gain to apply. Its
default <code>value</code> is 1 (no gain change). The nominal <code>minValue</code> is 0, but the value may be
set negative for phase inversion. The nominal <code>maxValue</code> is 1, but higher values are allowed (no
exception thrown). This parameter is <em>a-rate</em>. </p>
| </dd> |
| </dl> |
| </div> |
| </div> |
| |
| <div id="DelayNode-section" class="section"> |
| <h2 id="DelayNode">4.8. The DelayNode Interface</h2> |
| |
| <p>A delay-line is a fundamental building block in audio applications. This |
| interface is an AudioNode with a single input and single output: </p> |
| <pre> |
| numberOfInputs : 1 |
| numberOfOutputs : 1 |
| |
| channelCountMode = "max"; |
| channelInterpretation = "speakers"; |
| </pre> |
| |
| <p> |
| The number of channels of the output always equals the number of channels of the input. |
| </p> |
| |
| <p>It delays the incoming audio signal by a certain amount. The default |
| amount is 0 seconds (no delay). When the delay time is changed, the |
| implementation must make the transition smoothly, without introducing |
| noticeable clicks or glitches to the audio stream. </p> |
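<p>For example, the following informative sketch (assuming an AudioContext <code>context</code> and a source
node <code>source</code>) builds a simple feedback echo. The cycle is permitted because it contains a DelayNode:</p>

<div class="example">

<div class="exampleHeader">
Example</div>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span></div>

<div class="blockContent">
<pre class="code"><code class="es-code">
var delay = context.createDelay(1.0);   // maxDelayTime of 1 second
var feedback = context.createGain();

delay.delayTime.value = 0.25;           // 250ms between echoes
feedback.gain.value = 0.5;              // each echo at half the level

source.connect(delay);
delay.connect(feedback);
feedback.connect(delay);                // cycle containing a DelayNode
delay.connect(context.destination);
</code></pre>
</div>
</div>
</div>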
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="delay-node-idl"> |
| |
| interface <dfn id="dfn-DelayNode">DelayNode</dfn> : AudioNode { |
| |
| readonly attribute AudioParam delayTime; |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| |
| <div id="attributes-GainNode-section_2" class="section"> |
| <h3 id="attributes-GainNode_2">4.8.1. Attributes</h3> |
| <dl> |
| <dt id="dfn-delayTime_2"><code>delayTime</code></dt> |
<dd><p>An AudioParam object representing the amount of delay (in seconds)
to apply. The default value (<code>delayTime.value</code>) is 0 (no
delay). The minimum value is 0 and the maximum value is determined by the <em>maxDelayTime</em>
argument to the <code>AudioContext</code> method <code>createDelay</code>. This parameter is <em>a-rate</em>.</p>
| </dd> |
| </dl> |
| </div> |
| </div> |
| |
| <div id="AudioBuffer-section" class="section"> |
| <h2 id="AudioBuffer">4.9. The AudioBuffer Interface</h2> |
| |
| <p>This interface represents a memory-resident audio asset (for one-shot sounds |
| and other short audio clips). Its format is non-interleaved IEEE 32-bit linear PCM with a |
| nominal range of -1 -> +1. It can contain one or more channels. Typically, it would be expected that the length |
| of the PCM data would be fairly short (usually somewhat less than a minute). |
| For longer sounds, such as music soundtracks, streaming should be used with the |
| <code>audio</code> element and <code>MediaElementAudioSourceNode</code>. </p> |
| |
| <p> |
| An AudioBuffer may be used by one or more AudioContexts. |
| </p> |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="audio-buffer-idl"> |
| |
| interface <dfn id="dfn-AudioBuffer">AudioBuffer</dfn> { |
| |
| readonly attribute float sampleRate; |
| readonly attribute long length; |
| |
| <span class="comment">// in seconds </span> |
| readonly attribute double duration; |
| |
| readonly attribute long numberOfChannels; |
| |
| Float32Array getChannelData(unsigned long channel); |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| |
| <div id="attributes-AudioBuffer-section" class="section"> |
| <h3 id="attributes-AudioBuffer">4.9.1. Attributes</h3> |
| <dl> |
| <dt id="dfn-sampleRate_AudioBuffer"><code>sampleRate</code></dt> |
| <dd><p>The sample-rate for the PCM audio data in samples per second.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-length_AudioBuffer"><code>length</code></dt> |
| <dd><p>Length of the PCM audio data in sample-frames.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-duration_AudioBuffer"><code>duration</code></dt> |
| <dd><p>Duration of the PCM audio data in seconds.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-numberOfChannels_AudioBuffer"><code>numberOfChannels</code></dt> |
| <dd><p>The number of discrete audio channels.</p> |
| </dd> |
| </dl> |
| </div> |
| |
| <div id="methodsandparams-AudioBuffer-section" class="section"> |
| <h3 id="methodsandparams-AudioBuffer">4.9.2. Methods and Parameters</h3> |
| <dl> |
| <dt id="dfn-getChannelData">The <code>getChannelData</code> method</dt> |
| <dd><p>Returns the <code>Float32Array</code> representing the PCM audio data for the specific channel.</p> |
| <p>The <dfn id="dfn-channel">channel</dfn> parameter is an index |
| representing the particular channel to get data for. An index value of 0 represents |
| the first channel. This index value MUST be less than <code>numberOfChannels</code> |
| or an exception will be thrown.</p> |
| </dd> |
| </dl> |
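<p>For example, the following informative sketch (assuming an AudioContext <code>context</code>) creates a
one-second stereo buffer and fills its first channel with a 440Hz sine tone:</p>

<div class="example">

<div class="exampleHeader">
Example</div>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span></div>

<div class="blockContent">
<pre class="code"><code class="es-code">
var buffer = context.createBuffer(2, context.sampleRate, context.sampleRate);
var data = buffer.getChannelData(0);    // Float32Array for the first channel

for (var i = 0; i < data.length; ++i)
    data[i] = Math.sin(2 * Math.PI * 440 * i / buffer.sampleRate);
</code></pre>
</div>
</div>
</div>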
| </div> |
| </div> |
| |
| <div id="AudioBufferSourceNode-section" class="section"> |
| <h2 id="AudioBufferSourceNode">4.10. The AudioBufferSourceNode Interface</h2> |
| |
| <p>This interface represents an audio source from an in-memory audio asset in |
| an <code>AudioBuffer</code>. It is useful for playing short audio assets |
which require a high degree of scheduling flexibility (can play back in
rhythmically perfect ways). The start() method is used to schedule when
| sound playback will happen. The playback will stop automatically when |
| the buffer's audio data has been completely |
| played (if the <code>loop</code> attribute is false), or when the stop() |
| method has been called and the specified time has been reached. Please see more |
| details in the start() and stop() description. start() and stop() may not be issued |
| multiple times for a given |
| AudioBufferSourceNode. </p> |
| <pre> numberOfInputs : 0 |
| numberOfOutputs : 1 |
| </pre> |
| |
| <p> |
The number of channels of the output always equals the number of channels of the AudioBuffer
assigned to the .buffer attribute, or is one channel of silence if .buffer is null.
| </p> |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="audio-buffer-source-node-idl"> |
| |
| interface <dfn id="dfn-AudioBufferSourceNode">AudioBufferSourceNode</dfn> : AudioNode { |
| |
| attribute AudioBuffer? buffer; |
| |
| readonly attribute AudioParam playbackRate; |
| |
| attribute boolean loop; |
| attribute double loopStart; |
| attribute double loopEnd; |
| |
| void start(optional double when = 0, optional double offset = 0, optional double duration); |
| void stop(optional double when = 0); |
| |
| attribute EventHandler onended; |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| |
| <div id="attributes-AudioBufferSourceNode-section" class="section"> |
| <h3 id="attributes-AudioBufferSourceNode">4.10.1. Attributes</h3> |
| <dl> |
| <dt id="dfn-buffer_AudioBufferSourceNode"><code>buffer</code></dt> |
| <dd><p>Represents the audio asset to be played. </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-playbackRate_AudioBufferSourceNode"><code>playbackRate</code></dt> |
<dd><p>The speed at which to render the audio stream. The default
playbackRate.value is 1. This parameter is <em>a-rate</em>. </p>
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-loop_AudioBufferSourceNode"><code>loop</code></dt> |
| <dd><p>Indicates if the audio data should play in a loop. The default value is false. </p> |
| </dd> |
| </dl> |
| |
| <dl> |
| <dt id="dfn-loopStart_AudioBufferSourceNode"><code>loopStart</code></dt> |
| <dd><p>An optional value in seconds where looping should begin if the <code>loop</code> attribute is true. |
| Its default value is 0, and it may usefully be set to any value between 0 and the duration of the buffer.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-loopEnd_AudioBufferSourceNode"><code>loopEnd</code></dt> |
| <dd><p>An optional value in seconds where looping should end if the <code>loop</code> attribute is true. |
| Its default value is 0, and it may usefully be set to any value between 0 and the duration of the buffer.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-onended_AudioBufferSourceNode"><code>onended</code></dt> |
| <dd><p>A property used to set the <code>EventHandler</code> (described in <cite><a |
| href="http://www.whatwg.org/specs/web-apps/current-work/#eventhandler">HTML</a></cite>) |
| for the ended event that is dispatched to <a |
| href="#AudioBufferSourceNode-section"><code>AudioBufferSourceNode</code></a> |
| node types. When the playback of the buffer for an <code>AudioBufferSourceNode</code> |
| is finished, an event of type <code>Event</code> (described in <cite><a |
| href="http://www.whatwg.org/specs/web-apps/current-work/#event">HTML</a></cite>) |
| will be dispatched to the event handler. </p> |
| </dd> |
| </dl> |
| |
| |
| </div> |
| </div> |
| |
| <div id="methodsandparams-AudioBufferSourceNode-section" class="section"> |
| <h3 id="methodsandparams-AudioBufferSourceNode">4.10.2. Methods and |
| Parameters</h3> |
| <dl> |
| <dt id="dfn-start">The <code>start</code> method</dt> |
| <dd><p>Schedules a sound to playback at an exact time.</p> |
| <p>The <dfn id="dfn-when">when</dfn> parameter describes at what time (in |
| seconds) the sound should start playing. It is in the same |
| time coordinate system as AudioContext.currentTime. If 0 is passed in for |
| this value or if the value is less than <b>currentTime</b>, then the |
| sound will start playing immediately. <code>start</code> may only be called one time |
| and must be called before <code>stop</code> is called or an exception will be thrown.</p> |
| <p>The <dfn id="dfn-offset">offset</dfn> parameter describes |
| the offset time in the buffer (in seconds) where playback will begin. If 0 is passed |
| in for this value, then playback will start from the beginning of the buffer.</p> |
| <p>The <dfn id="dfn-duration">duration</dfn> parameter |
| describes the duration of the portion (in seconds) to be played. If this parameter is not passed, |
| the duration will be equal to the total duration of the AudioBuffer minus the <code>offset</code> parameter. |
| Thus if neither <code>offset</code> nor <code>duration</code> are specified then the implied duration is |
| the total duration of the AudioBuffer. |
| </p> |
| |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-stop">The <code>stop</code> method</dt> |
| <dd><p>Schedules a sound to stop playback at an exact time.</p> |
| <p>The <dfn id="dfn-when_AudioBufferSourceNode_2">when</dfn> parameter |
| describes at what time (in seconds) the sound should stop playing. |
| It is in the same time coordinate system as AudioContext.currentTime. |
| If 0 is passed in for this value or if the value is less than |
| <b>currentTime</b>, then the sound will stop playing immediately. |
<code>stop</code> must only be called one time and only after a call to <code>start</code>,
or an exception will be thrown.</p>
| </dd> |
| </dl> |
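<p>For example, the following informative sketch (assuming an AudioContext <code>context</code> and an
AudioBuffer <code>buffer</code>) schedules playback to begin half a second from now, one second into the
buffer, and to stop three seconds from now:</p>

<pre>
var source = context.createBufferSource();
source.buffer = buffer;
source.connect(context.destination);

var now = context.currentTime;
source.start(now + 0.5, 1.0);   // in 0.5 seconds, from an offset of 1 second
source.stop(now + 3.0);         // stop playback 3 seconds from now
</pre>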
| </div> |
| |
| <div id="looping-AudioBufferSourceNode-section" class="section"> |
| <h3 id="looping-AudioBufferSourceNode">4.10.3. Looping</h3> |
| <p> |
| If the <code>loop</code> attribute is true when <code>start()</code> is called, then playback will continue indefinitely |
| until <code>stop()</code> is called and the stop time is reached. We'll call this "loop" mode. Playback always starts at the point in the buffer indicated |
| by the <code>offset</code> argument of <code>start()</code>, and in <em>loop</em> mode will continue playing until it reaches the <em>actualLoopEnd</em> position |
| in the buffer (or the end of the buffer), at which point it will wrap back around to the <em>actualLoopStart</em> position in the buffer, and continue |
| playing according to this pattern. |
| </p> |
| |
| <p> |
In <em>loop</em> mode, the <em>actual</em> loop points are calculated as follows from the <code>loopStart</code> and <code>loopEnd</code> attributes:
| </p> |
| |
| <blockquote> |
| <pre> |
if ((loopStart || loopEnd) && loopStart >= 0 && loopEnd > 0 && loopStart < loopEnd) {
    actualLoopStart = loopStart;
    actualLoopEnd = min(loopEnd, buffer.duration);
} else {
    actualLoopStart = 0;
    actualLoopEnd = buffer.duration;
}
| </pre> |
| </blockquote> |
| |
| <p> |
| Note that the default values for <code>loopStart</code> and <code>loopEnd</code> are both 0, which indicates that looping should occur from the very start |
| to the very end of the buffer. |
| </p> |
| |
| <p> |
| Please note that as a low-level implementation detail, the AudioBuffer is at a specific sample-rate (usually the same as the AudioContext sample-rate), and |
| that the loop times (in seconds) must be converted to the appropriate sample-frame positions in the buffer according to this sample-rate. |
| </p> |
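<p>For example, an informative sketch (assuming an AudioBufferSourceNode <code>source</code> with its
<code>buffer</code> already assigned) which repeats a half-second region of the buffer:</p>

<pre>
source.loop = true;
source.loopStart = 0.25;   // seconds
source.loopEnd = 0.75;     // seconds
source.start(0, 0.25);     // begin playback at the loop start point
</pre>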
| |
| </div> |
| |
| <div id="MediaElementAudioSourceNode-section" class="section"> |
| <h2 id="MediaElementAudioSourceNode">4.11. The MediaElementAudioSourceNode |
| Interface</h2> |
| |
| <p>This interface represents an audio source from an <code>audio</code> or |
| <code>video</code> element. </p> |
| <pre> numberOfInputs : 0 |
| numberOfOutputs : 1 |
| </pre> |
| |
| <p> |
| The number of channels of the output corresponds to the number of channels of the media referenced by the HTMLMediaElement. |
| Thus, changes to the media element's .src attribute can change the number of channels output by this node. |
| If the .src attribute is not set, then the number of channels output will be one silent channel. |
| </p> |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="media-element-audio-source-node-idl"> |
| |
| interface <dfn id="dfn-MediaElementAudioSourceNode">MediaElementAudioSourceNode</dfn> : AudioNode { |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| </div> |
| |
| <p>A MediaElementAudioSourceNode |
| is created given an HTMLMediaElement using the AudioContext <a href="#dfn-createMediaElementSource">createMediaElementSource()</a> method. </p> |
| |
| <p> |
| The number of channels of the single output equals the number of channels of the audio referenced by |
| the HTMLMediaElement passed in as the argument to createMediaElementSource(), or is 1 if the HTMLMediaElement |
| has no audio. |
| </p> |
| |
| <p> |
| The HTMLMediaElement must behave in an identical fashion after the MediaElementAudioSourceNode has |
| been created, <em>except</em> that the rendered audio will no longer be heard directly, but instead will be heard |
| as a consequence of the MediaElementAudioSourceNode being connected through the routing graph. Thus pausing, seeking, |
| volume, <code>.src</code> attribute changes, and other aspects of the HTMLMediaElement must behave as they normally would |
| if <em>not</em> used with a MediaElementAudioSourceNode. |
| </p> |
| |
| <div class="example"> |
| |
| <div class="exampleHeader"> |
| Example</div> |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">ECMAScript</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="es-code"> |
| var mediaElement = document.getElementById('mediaElementID'); |
| var sourceNode = context.createMediaElementSource(mediaElement); |
| sourceNode.connect(filterNode); |
| </code></pre> |
| </div> |
| </div> |
| </div> |
| </div> |
| |
| |
| <div id="ScriptProcessorNode-section" class="section"> |
| <h2 id="ScriptProcessorNode">4.12. The ScriptProcessorNode Interface</h2> |
| |
| <p>This interface is an AudioNode which can generate, process, or analyse audio |
| directly using JavaScript. </p> |
| <pre> |
| numberOfInputs : 1 |
| numberOfOutputs : 1 |
| |
| channelCount = numberOfInputChannels; |
| channelCountMode = "explicit"; |
| channelInterpretation = "speakers"; |
| </pre> |
| |
| <p>The ScriptProcessorNode is constructed with a <code>bufferSize</code> which |
| must be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384. |
| This value controls how frequently the <code>audioprocess</code> event |
| is dispatched and how many sample-frames need to be processed each call. |
| Lower numbers for <code>bufferSize</code> will result in a lower (better) <a |
| href="#Latency-section">latency</a>. Higher numbers will be necessary to avoid |
| audio breakup and <a href="#Glitching-section">glitches</a>. |
| This value will be picked by the implementation if the bufferSize argument |
| to <code>createScriptProcessor</code> is not passed in, or is set to 0.</p> |
| |
| <p><code>numberOfInputChannels</code> and <code>numberOfOutputChannels</code> |
| determine the number of input and output channels. It is invalid for both |
| <code>numberOfInputChannels</code> and <code>numberOfOutputChannels</code> to |
| be zero. </p> |
| <pre> var node = context.createScriptProcessor(bufferSize, numberOfInputChannels, numberOfOutputChannels); |
| </pre> |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="script-processor-node-idl"> |
| |
| interface <dfn id="dfn-ScriptProcessorNode">ScriptProcessorNode</dfn> : AudioNode { |
| |
| attribute EventHandler onaudioprocess; |
| |
| readonly attribute long bufferSize; |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| |
| <div id="attributes-ScriptProcessorNode-section" class="section"> |
| <h3 id="attributes-ScriptProcessorNode">4.12.1. Attributes</h3> |
| <dl> |
| <dt id="dfn-onaudioprocess"><code>onaudioprocess</code></dt> |
| <dd><p>A property used to set the <code>EventHandler</code> (described in <cite><a |
| href="http://www.whatwg.org/specs/web-apps/current-work/#eventhandler">HTML</a></cite>) |
| for the audioprocess event that is dispatched to <a |
| href="#ScriptProcessorNode-section"><code>ScriptProcessorNode</code></a> |
| node types. An event of type <a |
| href="#AudioProcessingEvent-section"><code>AudioProcessingEvent</code></a> |
| will be dispatched to the event handler. </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-bufferSize_ScriptProcessorNode"><code>bufferSize</code></dt> |
<dd><p>The size of the buffer (in sample-frames) which needs to be
processed each time <code>onaudioprocess</code> is called. Legal values
are (256, 512, 1024, 2048, 4096, 8192, 16384). </p>
| </dd> |
| </dl> |
| </div> |
| </div> |
| |
| <div id="AudioProcessingEvent-section" class="section"> |
| <h2 id="AudioProcessingEvent">4.13. The AudioProcessingEvent Interface</h2> |
| |
| <p>This is an <code>Event</code> object which is dispatched to <a |
| href="#ScriptProcessorNode-section"><code>ScriptProcessorNode</code></a> nodes. </p> |
| |
| <p>The event handler processes audio from the input (if any) by accessing the |
| audio data from the <code>inputBuffer</code> attribute. The audio data which is |
| the result of the processing (or the synthesized data if there are no inputs) |
| is then placed into the <code>outputBuffer</code>. </p> |
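<p>For example, the following informative sketch (assuming an AudioContext <code>context</code>) copies the
input to the output at half the amplitude:</p>

<div class="example">

<div class="exampleHeader">
Example</div>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span></div>

<div class="blockContent">
<pre class="code"><code class="es-code">
var processor = context.createScriptProcessor(4096, 1, 1);

processor.onaudioprocess = function(event) {
    var input = event.inputBuffer.getChannelData(0);
    var output = event.outputBuffer.getChannelData(0);
    for (var i = 0; i < input.length; ++i)
        output[i] = 0.5 * input[i];   // attenuate by half
};
</code></pre>
</div>
</div>
</div>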
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="audio-processing-event-idl"> |
| |
| interface <dfn id="dfn-AudioProcessingEvent">AudioProcessingEvent</dfn> : Event { |
| |
| readonly attribute double playbackTime; |
| readonly attribute AudioBuffer inputBuffer; |
| readonly attribute AudioBuffer outputBuffer; |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| |
| <div id="attributes-AudioProcessingEvent-section" class="section"> |
| <h3 id="attributes-AudioProcessingEvent">4.13.1. Attributes</h3> |
| <dl> |
| <dt id="dfn-playbackTime"><code>playbackTime</code></dt> |
<dd><p>The time when the audio will be played in the same time coordinate system as AudioContext.currentTime.
<code>playbackTime</code> allows for very tight synchronization between
processing directly in JavaScript and the other events in the context's
rendering graph. </p>
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-inputBuffer"><code>inputBuffer</code></dt> |
| <dd><p>An AudioBuffer containing the input audio data. It will have a number of channels equal to the <code>numberOfInputChannels</code> parameter |
| of the createScriptProcessor() method. This AudioBuffer is only valid while in the scope of the <code>onaudioprocess</code> |
| function. Its values will be meaningless outside of this scope.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-outputBuffer"><code>outputBuffer</code></dt> |
| <dd><p>An AudioBuffer where the output audio data should be written. It will have a number of channels equal to the |
| <code>numberOfOutputChannels</code> parameter of the createScriptProcessor() method. |
| Script code within the scope of the <code>onaudioprocess</code> function is expected to modify the |
| <code>Float32Array</code> arrays representing channel data in this AudioBuffer. |
| Any script modifications to this AudioBuffer outside of this scope will not produce any audible effects.</p> |
| </dd> |
| </dl> |
| </div> |
| </div> |
| |
| <div id="PannerNode-section" class="section"> |
| <h2 id="PannerNode">4.14. The PannerNode Interface</h2> |
| |
| <p>This interface represents a processing node which <a |
| href="#Spatialization-section">positions / spatializes</a> an incoming audio |
| stream in three-dimensional space. The spatialization is in relation to the <a |
| href="#AudioContext-section"><code>AudioContext</code></a>'s <a |
| href="#AudioListener-section"><code>AudioListener</code></a> |
| (<code>listener</code> attribute). </p> |
| |
| <pre> |
| numberOfInputs : 1 |
| numberOfOutputs : 1 |
| |
| channelCount = 2; |
| channelCountMode = "clamped-max"; |
| channelInterpretation = "speakers"; |
| </pre> |
| |
| <p> |
| The audio stream from the input will be either mono or stereo, depending on the connection(s) to the input. |
| </p> |
| |
| <p> |
| The output of this node is hard-coded to stereo (2 channels) and <em>currently</em> cannot be configured. |
| </p> |
| |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="panner-node-idl"> |
| |
| enum <dfn>PanningModelType</dfn> { |
| "equalpower", |
| "HRTF" |
| }; |
| |
| enum <dfn>DistanceModelType</dfn> { |
| "linear", |
| "inverse", |
| "exponential" |
| }; |
| |
| interface <dfn id="dfn-PannerNode">PannerNode</dfn> : AudioNode { |
| |
| <span class="comment">// Default for stereo is HRTF </span> |
| attribute PanningModelType panningModel; |
| |
| <span class="comment">// Uses a 3D cartesian coordinate system </span> |
| void setPosition(double x, double y, double z); |
| void setOrientation(double x, double y, double z); |
| void setVelocity(double x, double y, double z); |
| |
| <span class="comment">// Distance model and attributes </span> |
| attribute DistanceModelType distanceModel; |
| attribute double refDistance; |
| attribute double maxDistance; |
| attribute double rolloffFactor; |
| |
| <span class="comment">// Directional sound cone </span> |
| attribute double coneInnerAngle; |
| attribute double coneOuterAngle; |
| attribute double coneOuterGain; |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| </div> |
| |
| <div id="attributes-PannerNode_attributes-section" class="section"> |
| <h3 id="attributes-PannerNode_attributes">4.14.2. Attributes</h3> |
| <dl> |
| <dt id="dfn-panningModel"><code>panningModel</code></dt> |
| <dd><p>Determines which spatialization algorithm will be used to position |
| the audio in 3D space. The default is "HRTF". </p> |
| |
| <dl> |
| <dt id="dfn-EQUALPOWER"><code>"equalpower"</code></dt> |
| <dd><p>A simple and efficient spatialization algorithm using equal-power |
| panning. </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-HRTF"><code>"HRTF"</code></dt> |
| <dd><p>A higher quality spatialization algorithm using a convolution with |
| measured impulse responses from human subjects. This panning method |
| renders stereo output. </p> |
| </dd> |
| </dl> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-distanceModel"><code>distanceModel</code></dt> |
| <dd><p>Determines which algorithm will be used to reduce the volume of an |
| audio source as it moves away from the listener. The default is "inverse". |
| </p> |
| |
| <dl> |
| <dt id="dfn-LINEAR_DISTANCE"><code>"linear"</code></dt> |
| <dd><p>A linear distance model which calculates <em>distanceGain</em> according to: </p> |
| <pre> |
| 1 - rolloffFactor * (distance - refDistance) / (maxDistance - refDistance) |
| </pre> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-INVERSE_DISTANCE"><code>"inverse"</code></dt> |
| <dd><p>An inverse distance model which calculates <em>distanceGain</em> according to: </p> |
| <pre> |
| refDistance / (refDistance + rolloffFactor * (distance - refDistance)) |
| </pre> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-EXPONENTIAL_DISTANCE"><code>"exponential"</code></dt> |
| <dd><p>An exponential distance model which calculates <em>distanceGain</em> according to: </p> |
| <pre> |
| pow(distance / refDistance, -rolloffFactor) |
| </pre> |
| </dd> |
| </dl> |
| |
| |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-refDistance"><code>refDistance</code></dt> |
<dd><p>A reference distance for reducing volume as the source moves further from
the listener. The default value is 1. </p>
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-maxDistance"><code>maxDistance</code></dt> |
| <dd><p>The maximum distance between source and listener, after which the |
| volume will not be reduced any further. The default value is 10000. </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-rolloffFactor"><code>rolloffFactor</code></dt> |
<dd><p>Describes how quickly the volume is reduced as the source moves away
from the listener. The default value is 1. </p>
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-coneInnerAngle"><code>coneInnerAngle</code></dt> |
| <dd><p>A parameter for directional audio sources, this is an angle, inside |
| of which there will be no volume reduction. The default value is 360. </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-coneOuterAngle"><code>coneOuterAngle</code></dt> |
<dd><p>A parameter for directional audio sources, this is an angle, in
degrees, outside of which the volume will be reduced to a constant value of
<b>coneOuterGain</b>. The default value is 360. </p>
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-coneOuterGain"><code>coneOuterGain</code></dt> |
| <dd><p>A parameter for directional audio sources, this is the amount of |
| volume reduction outside of the <b>coneOuterAngle</b>. The default value is 0. </p> |
| </dd> |
| </dl> |
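<p>As a non-normative example, a directional sound cone might be configured
as follows, where <code>context</code> is assumed to be an existing
AudioContext and the specific angle and gain values are hypothetical:</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span></div>

<div class="blockContent">
<pre class="code"><code class="es-code">

var panner = context.createPanner();

// Full volume inside a 60-degree inner cone, fading down to
// half volume outside a 120-degree outer cone (hypothetical values).
panner.coneInnerAngle = 60;
panner.coneOuterAngle = 120;
panner.coneOuterGain = 0.5;
</code></pre>
</div>
</div>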
| </div> |
| |
| <h3 id="Methods_and_Parameters">4.14.3. Methods and Parameters</h3> |
| <dl> |
| <dt id="dfn-setPosition">The <code>setPosition</code> method</dt> |
| <dd><p>Sets the position of the audio source relative to the |
| <b>listener</b> attribute. A 3D cartesian coordinate system is used.</p> |
| <p>The <dfn id="dfn-x">x, y, z</dfn> parameters represent the coordinates |
| in 3D space. </p> |
<p>The default value is (0,0,0).</p>
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-setOrientation">The <code>setOrientation</code> method</dt> |
| <dd><p>Describes which direction the audio source is pointing in the 3D |
| cartesian coordinate space. Depending on how directional the sound is |
| (controlled by the <b>cone</b> attributes), a sound pointing away from |
| the listener can be very quiet or completely silent.</p> |
| <p>The <dfn id="dfn-x_2">x, y, z</dfn> parameters represent a direction |
| vector in 3D space. </p> |
<p>The default value is (1,0,0).</p>
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-setVelocity">The <code>setVelocity</code> method</dt> |
<dd><p>Sets the velocity vector of the audio source. This vector controls
both the direction of travel and the speed in 3D space. This velocity
relative to the listener's velocity is used to determine how much doppler
shift (pitch change) to apply. The units used for this vector are <em>meters / second</em>
and are independent of the units used for position and orientation vectors.</p>
<p>The <dfn id="dfn-x_3">x, y, z</dfn> parameters describe a direction
vector indicating direction of travel and intensity. </p>
<p>The default value is (0,0,0).</p>
| </dd> |
| </dl> |
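<p>The following non-normative sketch places a source to the listener's
right, points it back toward the origin, and gives it a velocity so that
doppler shift will be applied (<code>panner</code> is assumed to be an
existing PannerNode):</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span></div>

<div class="blockContent">
<pre class="code"><code class="es-code">

// Place the source 3 units to the right of the origin.
panner.setPosition(3, 0, 0);

// Point it back toward the origin (the -x direction).
panner.setOrientation(-1, 0, 0);

// Travel toward the origin at 1 meter per second.
panner.setVelocity(-1, 0, 0);
</code></pre>
</div>
</div>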
| |
| <div id="AudioListener-section" class="section"> |
| <h2 id="AudioListener">4.15. The AudioListener Interface</h2> |
| |
| <p>This interface represents the position and orientation of the person |
| listening to the audio scene. All <a |
| href="#PannerNode-section"><code>PannerNode</code></a> objects |
spatialize in relation to the AudioContext's <code>listener</code>. See the <a
href="#Spatialization-section">spatialization section</a> for more
details. </p>
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="audio-listener-idl"> |
| |
| interface <dfn id="dfn-AudioListener">AudioListener</dfn> { |
| |
| attribute double dopplerFactor; |
| attribute double speedOfSound; |
| |
| <span class="comment">// Uses a 3D cartesian coordinate system </span> |
| void setPosition(double x, double y, double z); |
| void setOrientation(double x, double y, double z, double xUp, double yUp, double zUp); |
| void setVelocity(double x, double y, double z); |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| </div> |
| |
| <div id="attributes-AudioListener-section" class="section"> |
| <h3 id="attributes-AudioListener">4.15.1. Attributes</h3> |
| <dl> |
| <dt id="dfn-dopplerFactor"><code>dopplerFactor</code></dt> |
| <dd><p>A constant used to determine the amount of pitch shift to use when |
| rendering a doppler effect. The default value is 1. </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-speedOfSound"><code>speedOfSound</code></dt> |
| <dd><p>The speed of sound used for calculating doppler shift. The default |
| value is 343.3. </p> |
| </dd> |
| </dl> |
| </div> |
| |
| <h3 id="L15842">4.15.2. Methods and Parameters</h3> |
| <dl> |
| <dt id="dfn-setPosition_2">The <code>setPosition</code> method</dt> |
| <dd><p>Sets the position of the listener in a 3D cartesian coordinate |
| space. <code>PannerNode</code> objects use this position relative to |
| individual audio sources for spatialization.</p> |
| <p>The <dfn id="dfn-x_AudioListener">x, y, z</dfn> parameters represent |
| the coordinates in 3D space. </p> |
<p>The default value is (0,0,0).</p>
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-setOrientation_2">The <code>setOrientation</code> method</dt> |
| <dd><p>Describes which direction the listener is pointing in the 3D |
| cartesian coordinate space. Both a <b>front</b> vector and an <b>up</b> |
| vector are provided. In simple human terms, the <b>front</b> vector represents which |
| direction the person's nose is pointing. The <b>up</b> vector represents the |
direction the top of a person's head is pointing. These two vectors are expected to
be orthogonal (at right angles to each other). For normative requirements
| of how these values are to be interpreted, see the |
| <a href="#Spatialization-section">spatialization section</a>. |
| </p> |
| <p>The <dfn id="dfn-x_setOrientation">x, y, z</dfn> parameters represent |
| a <b>front</b> direction vector in 3D space, with the default value being (0,0,-1) </p> |
| <p>The <dfn id="dfn-x_setOrientation_2">xUp, yUp, zUp</dfn> parameters |
| represent an <b>up</b> direction vector in 3D space, with the default value being (0,1,0) </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-setVelocity_4">The <code>setVelocity</code> method</dt> |
<dd><p>Sets the velocity vector of the listener. This vector controls both
the direction of travel and the speed in 3D space. This velocity relative to
an audio source's velocity is used to determine how much doppler shift
(pitch change) to apply. The units used for this vector are <em>meters / second</em>
and are independent of the units used for position and orientation vectors.</p>
<p>The <dfn id="dfn-x_setVelocity_5">x, y, z</dfn> parameters describe a
direction vector indicating direction of travel and intensity. </p>
<p>The default value is (0,0,0).</p>
| </dd> |
| </dl> |
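<p>For example, a listener standing at the origin and facing into the
screen could be described as follows (non-normative; <code>context</code>
is assumed to be an existing AudioContext):</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span></div>

<div class="blockContent">
<pre class="code"><code class="es-code">

var listener = context.listener;

// Stand at the origin, facing the -z direction, head pointing up (+y).
listener.setPosition(0, 0, 0);
listener.setOrientation(0, 0, -1, 0, 1, 0);
</code></pre>
</div>
</div>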
| |
| <div id="ConvolverNode-section" class="section"> |
| <h2 id="ConvolverNode">4.16. The ConvolverNode Interface</h2> |
| |
| <p>This interface represents a processing node which applies a <a |
| href="#Convolution-section">linear convolution effect</a> given an impulse |
| response. Normative requirements for multi-channel convolution matrixing are described |
| <a href="#Convolution-reverb-effect">here</a>. </p> |
| <pre> |
| numberOfInputs : 1 |
| numberOfOutputs : 1 |
| |
| channelCount = 2; |
| channelCountMode = "clamped-max"; |
| channelInterpretation = "speakers"; |
| </pre> |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="convolver-node-idl"> |
| |
| interface <dfn id="dfn-ConvolverNode">ConvolverNode</dfn> : AudioNode { |
| |
| attribute AudioBuffer? buffer; |
| attribute boolean normalize; |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| </div> |
| |
| <div id="attributes-ConvolverNode-section" class="section"> |
| <h3 id="attributes-ConvolverNode">4.16.1. Attributes</h3> |
| <dl> |
| <dt id="dfn-buffer_ConvolverNode"><code>buffer</code></dt> |
| <dd><p>A mono, stereo, or 4-channel <code>AudioBuffer</code> containing the (possibly multi-channel) impulse response |
| used by the ConvolverNode. This <code>AudioBuffer</code> must be of the same sample-rate as the AudioContext or an exception will |
| be thrown. At the time when this attribute is set, the <em>buffer</em> and the state of the <em>normalize</em> |
| attribute will be used to configure the ConvolverNode with this impulse response having the given normalization. |
| The initial value of this attribute is null.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-normalize"><code>normalize</code></dt> |
| <dd><p>Controls whether the impulse response from the buffer will be scaled |
by an equal-power normalization when the <code>buffer</code> attribute
| is set. Its default value is <code>true</code> in order to achieve a more |
| uniform output level from the convolver when loaded with diverse impulse |
| responses. If <code>normalize</code> is set to <code>false</code>, then |
| the convolution will be rendered with no pre-processing/scaling of the |
| impulse response. Changes to this value do not take effect until the next time |
| the <em>buffer</em> attribute is set. </p> |
| |
| </dd> |
| </dl> |
| |
<p>
If the <em>normalize</em> attribute is false when the <em>buffer</em> attribute is set, then the
ConvolverNode will perform a linear convolution given the exact impulse response contained within the <em>buffer</em>.
</p>
<p>
Otherwise, if the <em>normalize</em> attribute is true when the <em>buffer</em> attribute is set, then the
ConvolverNode will first perform a scaled RMS-power analysis of the audio data contained within <em>buffer</em> to calculate a
<em>normalizationScale</em> using the following algorithm:
</p>
| |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="es-code"> |
| |
| float calculateNormalizationScale(buffer) |
| { |
| const float GainCalibration = 0.00125; |
| const float GainCalibrationSampleRate = 44100; |
| const float MinPower = 0.000125; |
| |
| // Normalize by RMS power. |
| size_t numberOfChannels = buffer->numberOfChannels(); |
| size_t length = buffer->length(); |
| |
| float power = 0; |
| |
| for (size_t i = 0; i < numberOfChannels; ++i) { |
| float* sourceP = buffer->channel(i)->data(); |
| float channelPower = 0; |
| |
| int n = length; |
| while (n--) { |
| float sample = *sourceP++; |
| channelPower += sample * sample; |
| } |
| |
| power += channelPower; |
| } |
| |
| power = sqrt(power / (numberOfChannels * length)); |
| |
| // Protect against accidental overload. |
| if (isinf(power) || isnan(power) || power < MinPower) |
| power = MinPower; |
| |
| float scale = 1 / power; |
| |
| // Calibrate to make perceived volume same as unprocessed. |
| scale *= GainCalibration; |
| |
| // Scale depends on sample-rate. |
| if (buffer->sampleRate()) |
| scale *= GainCalibrationSampleRate / buffer->sampleRate(); |
| |
| // True-stereo compensation. |
| if (buffer->numberOfChannels() == 4) |
| scale *= 0.5; |
| |
| return scale; |
| } |
| </code></pre> |
| |
</div>
</div>
| |
<p>
During processing, the ConvolverNode will then multiply this calculated <em>normalizationScale</em> value by the result of the
linear convolution of the input with the impulse response (represented by the <em>buffer</em>) to produce the
final output. Alternatively, any mathematically equivalent operation may be used, such as pre-multiplying the
input by <em>normalizationScale</em>, or pre-multiplying a version of the impulse response by <em>normalizationScale</em>.
</p>
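<p>As a non-normative usage sketch, an impulse response may be loaded,
decoded, and assigned to a ConvolverNode as follows, where
<code>context</code> is assumed to be an existing AudioContext and the
impulse-response URL is hypothetical:</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span></div>

<div class="blockContent">
<pre class="code"><code class="es-code">

var convolver = context.createConvolver();

var request = new XMLHttpRequest();
request.open("GET", "impulse-responses/cathedral.wav", true);
request.responseType = "arraybuffer";
request.onload = function() {
    context.decodeAudioData(request.response, function(impulseBuffer) {
        // normalize must be set before buffer in order to take effect.
        convolver.normalize = true;
        convolver.buffer = impulseBuffer;
    });
};
request.send();
</code></pre>
</div>
</div>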
| |
| </div> |
| |
| <div id="AnalyserNode-section" class="section"> |
| <h2 id="AnalyserNode">4.17. The AnalyserNode Interface</h2> |
| |
| <p>This interface represents a node which is able to provide real-time |
| frequency and time-domain <a href="#AnalyserNode">analysis</a> |
| information. The audio stream will be passed un-processed from input to output. |
| </p> |
| <pre> |
| numberOfInputs : 1 |
| numberOfOutputs : 1 <em>Note that this output may be left unconnected.</em> |
| |
| channelCount = 1; |
| channelCountMode = "explicit"; |
| channelInterpretation = "speakers"; |
| </pre> |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="analyser-node-idl"> |
| |
| interface <dfn id="dfn-AnalyserNode">AnalyserNode</dfn> : AudioNode { |
| |
| <span class="comment">// Real-time frequency-domain data </span> |
| void getFloatFrequencyData(Float32Array array); |
| void getByteFrequencyData(Uint8Array array); |
| |
| <span class="comment">// Real-time waveform data </span> |
| void getByteTimeDomainData(Uint8Array array); |
| |
| attribute unsigned long fftSize; |
| readonly attribute unsigned long frequencyBinCount; |
| |
| attribute double minDecibels; |
| attribute double maxDecibels; |
| |
| attribute double smoothingTimeConstant; |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| </div> |
| |
| <div id="attributes-ConvolverNode-section_2" class="section"> |
| <h3 id="attributes-ConvolverNode_2">4.17.1. Attributes</h3> |
| <dl> |
| <dt id="dfn-fftSize"><code>fftSize</code></dt> |
<dd><p>The size of the FFT used for frequency-domain analysis. This must be
a power of two in the range 32 to 2048, otherwise an INDEX_SIZE_ERR exception MUST be thrown.
The default value is 2048.</p>
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-frequencyBinCount"><code>frequencyBinCount</code></dt> |
| <dd><p>Half the FFT size. </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-minDecibels"><code>minDecibels</code></dt> |
| <dd><p>The minimum power value in the scaling range for the FFT analysis |
| data for conversion to unsigned byte values. |
| The default value is -100. |
If the value of this attribute is set to a value greater than or equal to <code>maxDecibels</code>,
| an INDEX_SIZE_ERR exception MUST be thrown.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-maxDecibels"><code>maxDecibels</code></dt> |
| <dd><p>The maximum power value in the scaling range for the FFT analysis |
| data for conversion to unsigned byte values. |
| The default value is -30. |
| If the value of this attribute is set to a value less than or equal to <code>minDecibels</code>, |
| an INDEX_SIZE_ERR exception MUST be thrown.</p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-smoothingTimeConstant"><code>smoothingTimeConstant</code></dt> |
<dd><p>A value from 0 to 1 where 0 represents no time averaging
with the last analysis frame.
The default value is 0.8.
If the value of this attribute is set to a value less than 0 or greater than 1,
an INDEX_SIZE_ERR exception MUST be thrown.</p>
| </dd> |
| </dl> |
| </div> |
| |
| <h3 id="methods-and-parameters">4.17.2. Methods and Parameters</h3> |
| <dl> |
| <dt id="dfn-getFloatFrequencyData">The <code>getFloatFrequencyData</code> |
| method</dt> |
<dd><p>Copies the current frequency data into the passed floating-point
array. If the array has fewer elements than the frequencyBinCount, the
excess frequency bins will be dropped. If the array has more elements than
the frequencyBinCount, the excess array elements will be ignored.</p>
| <p>The <dfn id="dfn-array">array</dfn> parameter is where |
| frequency-domain analysis data will be copied. </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-getByteFrequencyData">The <code>getByteFrequencyData</code> |
| method</dt> |
<dd><p>Copies the current frequency data into the passed unsigned byte
array. If the array has fewer elements than the frequencyBinCount, the
excess frequency bins will be dropped. If the array has more elements than
the frequencyBinCount, the excess array elements will be ignored.</p>
| <p>The <dfn id="dfn-array_2">array</dfn> parameter is where |
| frequency-domain analysis data will be copied. </p> |
| </dd> |
| </dl> |
| <dl> |
| <dt id="dfn-getByteTimeDomainData">The <code>getByteTimeDomainData</code> |
| method</dt> |
<dd><p>Copies the current time-domain (waveform) data into the passed
unsigned byte array. If the array has fewer elements than the
fftSize, the excess samples will be dropped. If the array has more
elements than fftSize, the excess array elements will be ignored.</p>
| <p>The <dfn id="dfn-array_3">array</dfn> parameter is where time-domain |
| analysis data will be copied. </p> |
| </dd> |
| </dl> |
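<p>A typical (non-normative) usage pattern polls the analyser once per
animation frame, where <code>context</code> and <code>source</code> are
assumed to be an existing AudioContext and source node:</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span></div>

<div class="blockContent">
<pre class="code"><code class="es-code">

var analyser = context.createAnalyser();
analyser.fftSize = 2048;
source.connect(analyser);

var freqData = new Uint8Array(analyser.frequencyBinCount);

function update() {
    // Fills freqData with byte values scaled between minDecibels and maxDecibels.
    analyser.getByteFrequencyData(freqData);
    // ... render freqData, for example to a canvas ...
    window.requestAnimationFrame(update);
}
update();
</code></pre>
</div>
</div>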
| |
| <div id="ChannelSplitterNode-section" class="section"> |
| <h2 id="ChannelSplitterNode">4.18. The ChannelSplitterNode Interface</h2> |
| |
| <p>The <code>ChannelSplitterNode</code> is for use in more advanced |
| applications and would often be used in conjunction with <a |
| href="#ChannelMergerNode-section"><code>ChannelMergerNode</code></a>. </p> |
| <pre> |
| numberOfInputs : 1 |
| numberOfOutputs : Variable N (defaults to 6) // number of "active" (non-silent) outputs is determined by number of channels in the input |
| |
| channelCountMode = "max"; |
| channelInterpretation = "speakers"; |
| </pre> |
| |
<p>This interface represents an AudioNode for accessing the individual channels
of an audio stream in the routing graph. It has a single input, and a number of
"active" outputs which equals the number of channels in the input audio stream.
For example, if a stereo input is connected to a
<code>ChannelSplitterNode</code> then the number of active outputs will be two
(one from the left channel and one from the right). There are always a total of
N outputs (determined by the <code>numberOfOutputs</code> parameter to the AudioContext method <code>createChannelSplitter()</code>),
with 6 as the default if this value is not provided. Any outputs
which are not "active" will output silence and would typically not be connected
to anything. </p>
| |
| <h3 id="example-1">Example:</h3> |
| <img alt="channel splitter" src="images/channel-splitter.png" /> |
| |
| <p>Please note that in this example, the splitter does <b>not</b> interpret the channel identities (such as left, right, etc.), but |
| simply splits out channels in the order that they are input.</p> |
| |
| <p>One application for <code>ChannelSplitterNode</code> is for doing "matrix |
| mixing" where individual gain control of each channel is desired. </p> |
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="channel-splitter-node-idl"> |
| |
| interface <dfn id="dfn-ChannelSplitterNode">ChannelSplitterNode</dfn> : AudioNode { |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| </div> |
| |
| <div id="ChannelMergerNode-section" class="section"> |
| <h2 id="ChannelMergerNode">4.19. The ChannelMergerNode Interface</h2> |
| |
| <p>The <code>ChannelMergerNode</code> is for use in more advanced applications |
| and would often be used in conjunction with <a |
| href="#ChannelSplitterNode-section"><code>ChannelSplitterNode</code></a>. </p> |
| <pre> |
numberOfInputs  : Variable N (defaults to 6)  // number of connected inputs may be less than this
| numberOfOutputs : 1 |
| |
| channelCountMode = "max"; |
| channelInterpretation = "speakers"; |
| </pre> |
| |
| <p>This interface represents an AudioNode for combining channels from multiple |
| audio streams into a single audio stream. It has a variable number of inputs (defaulting to 6), but not all of them |
| need be connected. There is a single output whose audio stream has a number of |
| channels equal to the sum of the numbers of channels of all the connected |
inputs. For example, if a <code>ChannelMergerNode</code> has two connected
| inputs (both stereo), then the output will be four channels, the first two from |
| the first input and the second two from the second input. In another example |
| with two connected inputs (both mono), the output will be two channels |
| (stereo), with the left channel coming from the first input and the right |
| channel coming from the second input. </p> |
| |
| <h3 id="example-2">Example:</h3> |
| <img alt="channel merger" src="images/channel-merger.png" /> |
| |
| <p>Please note that in this example, the merger does <b>not</b> interpret the channel identities (such as left, right, etc.), but |
| simply combines channels in the order that they are input.</p> |
| |
| |
<p>Be aware that it is possible to connect a <code>ChannelMergerNode</code>
in such a way that it outputs an audio stream with more channels
than the maximum supported by the audio hardware. If such an output is connected
to the AudioContext's <code>destination</code> (the audio hardware), then the extra channels will be ignored.
Thus, the <code>ChannelMergerNode</code> should be used in situations where the number
of channels is well understood. </p>
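<p>For example, two mono streams can be combined into a single stereo
stream as follows (non-normative; <code>context</code>,
<code>leftSource</code>, and <code>rightSource</code> are assumed to
exist):</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span></div>

<div class="blockContent">
<pre class="code"><code class="es-code">

var merger = context.createChannelMerger(2);

// The first merger input becomes the left channel, the second the right.
leftSource.connect(merger, 0, 0);
rightSource.connect(merger, 0, 1);
merger.connect(context.destination);
</code></pre>
</div>
</div>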
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="channel-merger-node-idl"> |
| |
| interface <dfn id="dfn-ChannelMergerNode">ChannelMergerNode</dfn> : AudioNode { |
| |
| }; |
| </code></pre> |
| </div> |
| </div> |
| </div> |
| |
| <div id="DynamicsCompressorNode-section" class="section"> |
| <h2 id="DynamicsCompressorNode">4.20. The DynamicsCompressorNode Interface</h2> |
| |
| <p>DynamicsCompressorNode is an AudioNode processor implementing a dynamics |
| compression effect. </p> |
| |
| <p>Dynamics compression is very commonly used in musical production and game |
| audio. It lowers the volume of the loudest parts of the signal and raises the |
| volume of the softest parts. Overall, a louder, richer, and fuller sound can be |
| achieved. It is especially important in games and musical applications where |
large numbers of individual sounds are played simultaneously, to control the
| overall signal level and help avoid clipping (distorting) the audio output to |
| the speakers. </p> |
| <pre> |
| numberOfInputs : 1 |
| numberOfOutputs : 1 |
| |
| channelCount = 2; |
| channelCountMode = "explicit"; |
| channelInterpretation = "speakers"; |
| </pre> |
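<p>A minimal (non-normative) usage sketch inserts a compressor just before
the destination to protect the final mix from clipping;
<code>context</code> and <code>masterGain</code> are assumed to be an
existing AudioContext and master gain node:</p>

<div class="block">

<div class="blockTitleDiv">
<span class="blockTitle">ECMAScript</span></div>

<div class="blockContent">
<pre class="code"><code class="es-code">

var compressor = context.createDynamicsCompressor();

masterGain.connect(compressor);
compressor.connect(context.destination);
</code></pre>
</div>
</div>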
| |
| <div class="block"> |
| |
| <div class="blockTitleDiv"> |
| <span class="blockTitle">Web IDL</span></div> |
| |
| <div class="blockContent"> |
| <pre class="code"><code class="idl-code" id="dynamics-compressor-node-idl"> |
| |
| interface <dfn id="dfn-DynamicsCompressorNode">DynamicsCompressorNode</dfn> : AudioNode { |
| |
| readonly attribute AudioParam threshold; // in Decibels |
| readonly attribute AudioParam knee; // in Decibels |
| readonly attribute AudioParam ratio; // unit-less |
| readonly attribute AudioParam reduction; // in Decibels
|