Decibel metering from an iPhone audio unit

18 June

Hello visitor!

If Core Audio and iOS development are your cup of tea, you might also want to check out OpenEars, Politepix’s shared-source library for continuous speech recognition and text-to-speech for iPhone and iPad development. It even has an API for defining rules-based recognition grammars dynamically as of version 1.7 – pretty neat! On to decibel metering:

[Take me right to the code please!] There are three levels of abstraction for audio on the iPhone. AVAudioPlayer is the easiest to use (great for 75% of cases) but gives you the least fine control and the highest latency; Audio Queue Services is the middle step, with less latency and a callback where you can do a lot of useful stuff; and at the lowest level there are two types of Audio Unit: Remote I/O (or RemoteIO) and the Voice Processing Audio Unit subtype.

Audio Units are a little less forgiving than Audio Queues in their setup, they have a few more low-level settings that need to be accounted for, they are a little less documented than Audio Queues, and their sample code on the developer site (aurioTouch) is a little less transparent than the Audio Queue sample (SpeakHere). All of this has led to the impression that they are ultra-difficult and should be approached with caution, although in practice the code is almost identical to that for Audio Queues if you aren’t mixing sounds and have a single callback. At least, I’ve spent as much time being mystified by a non-working Audio Queue as by a non-working Audio Unit on the iPhone. But it needs to be said that the main reason Audio Units aren’t much harder than Audio Queues at this point is that a lot of independent developers have put a lot of time into experimenting, asking questions, and publishing their results. A year ago they were much more of a black box.

The decision process on which technology to use is something like:

Q. Are any of the following statements true: “I need the lowest possible latency”, “I need to work with network streams of audio or audio in memory”, “I need to do signal processing”, “I need to record voice with maximum clarity”?
A. If yes, Audio Units are probably best. If no:
Q. With the answers to the previous questions being no, do you still need to be able to work with sound at the buffer level?
A. If yes, use Audio Queues or Audio Units, whichever is more comfortable. If no, use AVAudioPlayer/AVAudioRecorder.

In my experience there is just one big downside to the Audio Unit on the iPhone, which is that there is no working metering property for it. A metering property does appear in the audio unit properties header and in the iPhone Audio Unit docs, but it isn’t actually hooked up, and you can lose a lot of time discovering this via experimentation. So, if you’ve chosen to use Audio Units and your implementation is working, you have a render callback function, and that is where you can meter your samples. I have only written/tested this for 16-bit mono PCM data, so if you are using something else, adaptations might be required.
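
For orientation, this is roughly how such a render callback gets attached to a Remote I/O unit. Treat it as a sketch under assumptions: the helper name is made up, the unit is assumed to be created and configured for 16-bit mono PCM elsewhere, and AudioUnitRenderCallback is the callback shown further down (forward-declare it if it comes later in your file).

static void attachRenderCallback(AudioUnit audioUnit) { // hypothetical helper, not from the code below

	// Enable input on the Remote I/O unit; element 1 is the input element.
	UInt32 enableInput = 1;
	AudioUnitSetProperty(audioUnit,
	                     kAudioOutputUnitProperty_EnableIO,
	                     kAudioUnitScope_Input,
	                     1,
	                     &enableInput,
	                     sizeof(enableInput));

	// Ask the output element (element 0) to pull its data from our callback.
	AURenderCallbackStruct callbackInfo;
	callbackInfo.inputProc = AudioUnitRenderCallback;
	callbackInfo.inputProcRefCon = NULL; // or a pointer to your own context struct

	AudioUnitSetProperty(audioUnit,
	                     kAudioUnitProperty_SetRenderCallback,
	                     kAudioUnitScope_Input,
	                     0,
	                     &callbackInfo,
	                     sizeof(callbackInfo));
}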

To meter the samples in the render callback requires six steps.

Step 1: get an array of your samples that you can loop through. Each sample contains the amplitude.
Step 2: for each sample, get its amplitude’s absolute value.
Step 3: for each sample’s absolute value, run it through a simple low-pass filter.
Step 4: for each sample’s filtered absolute value, convert it into decibels.
Step 5: for each sample’s filtered absolute value in decibels, add an offset value that normalizes the clipping point of the device to zero.
Step 6: keep the highest value you find.

That end value will be more or less the same thing you’d get when using the metering property for an Audio Queue or AVAudioRecorder/AVAudioPlayer.
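
For comparison, this is roughly what the built-in version looks like with AVAudioRecorder (a sketch only, assuming a recorder that has already been created with a valid URL and recording settings, and polled from a timer):

recorder.meteringEnabled = YES; // ask AVAudioRecorder to keep level data
[recorder record];

// Later, for instance from an NSTimer on the main thread:
[recorder updateMeters];
float peakDecibels = [recorder peakPowerForChannel:0];       // 0 is the clipping point, negative below it
float averageDecibels = [recorder averagePowerForChannel:0]; // same scale, averaged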

Now, the actual code:

static OSStatus AudioUnitRenderCallback (void *inRefCon,
                                         AudioUnitRenderActionFlags *ioActionFlags,
                                         const AudioTimeStamp *inTimeStamp,
                                         UInt32 inBusNumber,
                                         UInt32 inNumberFrames,
                                         AudioBufferList *ioData) {

	OSStatus err = AudioUnitRender(audioUnitWrapper->audioUnit,
	                               ioActionFlags,
	                               inTimeStamp,
	                               1,
	                               inNumberFrames,
	                               ioData);

	if(err != 0) NSLog(@"AudioUnitRender status is %d", err);

	// These values would normally live with the rest of your preprocessor defines in real code.
#define DBOFFSET -74.0
	// DBOFFSET is an offset that will be used to normalize the decibels to a maximum of zero.
	// This is an estimate; you can do your own or construct an experiment to find the right value.
#define LOWPASSFILTERTIMESLICE .001
	// LOWPASSFILTERTIMESLICE is part of the low-pass filter and should be a small positive value.

	SInt16 *samples = (SInt16 *)(ioData->mBuffers[0].mData); // Step 1: get an array of your samples
	// that you can loop through. Each sample contains the amplitude.

	Float32 decibels = DBOFFSET; // When we have no signal we'll leave this on the lowest setting
	Float32 currentFilteredValueOfSampleAmplitude = 0.0;  // We'll need these two in the low-pass filter;
	Float32 previousFilteredValueOfSampleAmplitude = 0.0; // initialized to zero so the first pass has a defined starting point

	Float32 peakValue = DBOFFSET; // We'll end up storing the peak value here

	for(UInt32 i = 0; i < inNumberFrames; i++) {

		Float32 absoluteValueOfSampleAmplitude = abs(samples[i]); // Step 2: for each sample,
		// get its amplitude's absolute value.

		// Step 3: for each sample's absolute value, run it through a simple low-pass filter
		// Begin low-pass filter
		currentFilteredValueOfSampleAmplitude = LOWPASSFILTERTIMESLICE * absoluteValueOfSampleAmplitude + (1.0 - LOWPASSFILTERTIMESLICE) * previousFilteredValueOfSampleAmplitude;
		previousFilteredValueOfSampleAmplitude = currentFilteredValueOfSampleAmplitude;
		Float32 amplitudeToConvertToDB = currentFilteredValueOfSampleAmplitude;
		// End low-pass filter

		Float32 sampleDB = 20.0 * log10(amplitudeToConvertToDB) + DBOFFSET;
		// Step 4: for each sample's filtered absolute value, convert it into decibels
		// Step 5: for each sample's filtered absolute value in decibels,
		// add an offset value that normalizes the clipping point of the device to zero.

		if((sampleDB == sampleDB) && (sampleDB != -DBL_MAX)) { // if it's a rational number and isn't infinite

			if(sampleDB > peakValue) peakValue = sampleDB; // Step 6: keep the highest value you find.
			decibels = peakValue; // final value
		}
	}

	NSLog(@"decibel level is %f", decibels);

	for(UInt32 i = 0; i < ioData->mNumberBuffers; i++) { // This is only if you need to silence
		// the output of the audio unit
		memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize); // Delete this loop if you
		// need audio output as well as input
	}

	return err;
}

That should give you a metered decibel value which is analogous to the output of the metering property for an Audio Queue. If anyone has any corrections to this or comments I hope they’ll get in touch.

My starting point for learning this technique was a helpful response email from iWillApps’ Will to a silly question I had, which got me on track analyzing the actual samples; this page, where the math behind displaying dB is broken down pretty thoroughly; and this post on Stack Overflow, which explains that the process needs to be done on a rectified signal and has the low-pass filter code example.


54 Responses to “Decibel metering from an iPhone audio unit”

  1. K-Boy July 31, 2010 at 5:32 pm #

    I don’t understand this code:

    OSStatus err = AudioUnitRender(audioUnitWrapper->audioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);

    What is ‘audioUnitWrapper’?

    audioUnitWrapper gives an error in my code.

  2. Halle July 31, 2010 at 8:10 pm #

    Hi K-Boy,

    Thanks for pointing that out. When I work with Audio Units on the iPhone I use a C++ struct containing the elements I’m going to need to access such as the AudioUnit and the CAStreamBasicDescription for the input/output formats, etc. in this case, audioUnitWrapper is the name of the struct, and audioUnit is the actual Audio Unit that is being metered. So, audioUnitWrapper->audioUnit can just be replaced with the name of your Audio Unit object, however you’ve defined it.
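
    If it helps to see it, here is a minimal sketch of what I mean, written as a plain C struct with AudioStreamBasicDescription standing in for the C++ CAStreamBasicDescription; the names are just illustrative, not from a shipping project:

    typedef struct AudioUnitWrapper {
        AudioUnit audioUnit;                      // the Remote I/O unit being metered
        AudioStreamBasicDescription streamFormat; // the input/output format description
    } AudioUnitWrapper;

    static AudioUnitWrapper *audioUnitWrapper; // allocated and filled in during audio unit setup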

  3. K-Boy August 2, 2010 at 5:59 pm #

    Excuse me,
    I’ve read your posts.
    They have been a lot of help.

    But I don’t understand one part of your code:
    why did you use the low-pass filter?

    I want to get a positive decibel number,
    however your code produces a negative decibel number.

    So why do you decrease amplitudeToConvertToDB?

    Please comment.

    Thank you.

  4. Halle August 2, 2010 at 6:30 pm #

    Hi K-Boy,

    Good question. The low-pass filter smooths the displayed amplitude so that you get a readout that would be similar to a VU meter on a piece of audio equipment, which is the kind of shift of the amplitude value over time that we’re used to seeing in audio metering. You can try leaving it out to see what it does – the values are pretty jittery and the reaction of a UI element displaying the values is going to be fairly erratic.

    The negative decibel number is a particular convention for displaying power — it shows decibel values as a value below zero, where zero is the clipping point (point of distortion) for the recording device. You’ll see this approach on professional recording devices, and it is also the way that metering works for iPhone Audio Queues, which this code is intended to emulate.

    So, the negative decibel values are not absolute decibel values that show the sound pressure level in the environment; instead they show how many more decibels of sound pressure are possible before the recording device starts to distort (-40dB means that if there were another 40dB, the mic would clip). Unfortunately, I don’t happen to know the absolute clipping point for the iPhone mic so my constant of -74.0 is an estimate.

    If you want positive decibel values you could define DBOFFSET as 0.0 instead of -74.0. Let me know if that works for you.

  5. DeoKaushal August 25, 2010 at 4:36 am #

    Hi Halle
    Thanks for this post, which helped me a lot in developing a dB meter on the iPhone.
    Could you suggest some code from which I could build an RTA analyser?

    Thanks..

  6. Halle August 25, 2010 at 8:41 am #

    Hi DeoKaushal,

    I’m glad you found it helpful — have you checked out the AurioTouch example from Apple? It should get you pretty far with writing an RTA.

  7. K-boy August 25, 2010 at 3:06 pm #

    Hi Halle.
    I have got another question.

    I want to display the dB value on the screen of the iPhone (in a label).

    However, I haven’t managed to.
    Do you know how to do this?

  8. Halle August 25, 2010 at 3:33 pm #

    Hi K-boy,

    First question is whether the logging of the values that is in the code example is working for you. Do you see the values logged when you’re running the code? If so, and everything is therefore working as expected, my guess for the reason you aren’t seeing a UILabel update as expected is that you are probably running this code on your main thread, which is the UI thread on the iPhone (so the audio unit code might be blocking the UI from updating).

    I’m a little hesitant to give advice on threading on the iPhone since the recommended technique has just changed and the thread management approach I’m most familiar with is the old way of doing it with NSThread, so if this is the issue, I think it’s best for you to check out the Concurrency programming guide from Apple so you’ll be up to date on the most current approach: http://developer.apple.com/mac/library/documentation/General/Conceptual/ConcurrencyProgrammingGuide/Introduction/Introduction.html
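
    If the threading does turn out to be the issue, one common pattern (just a sketch; myViewController, decibelLabel and latestDecibelValue are hypothetical names for your controller, your label and wherever you stash the most recent metered value) is to hand the update to the main queue with GCD instead of touching UIKit from the audio thread:

    dispatch_async(dispatch_get_main_queue(), ^{
        myViewController.decibelLabel.text = [NSString stringWithFormat:@"%.1f dB", latestDecibelValue];
    });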

  9. DeoKaushal August 31, 2010 at 5:55 am #

    Hi Halle

    Thanks for your quick response.
    I went through the AurioTouch example from Apple, but that code does its processing with an FFT, and I want an RTA based on octave and 1/3-octave bands, not on an FFT.

    Do you know any way to get this?

  10. Halle August 31, 2010 at 9:32 am #

    Hi DeoKaushal,

    I don’t know a place for you to see existing C or C++ code for RTA offhand, but you could check out the archives of Apple’s Core Audio API mailing list to see if someone has had the same question:

    http://lists.apple.com/archives/Coreaudio-api

  11. Corey September 18, 2010 at 6:37 am #

    Right now i’m working with some code that uses a playbackCallback from the RemoteIO unit and sets the buffer for playback (to be sent directly to hardware) from the concurrent packets of an audio-file on disk that has been loaded into memory.

    The code uses AudioFileReadPackets and the AudioBuffer->mData is UInt32 for some reason. If I try changing it, the sound gets all screwed up.

    Long story short, how does using a UInt32 for Wave File PCM data change the algorithms above? Will it still work?

  12. Halle September 18, 2010 at 9:18 am #

    Hi Corey,

    I think it should be the same other than obtaining the samples, but I’ll be interested in knowing your results. Give it a try by changing the line:

    SInt16* samples = (SInt16*)(ioData->mBuffers[0].mData);

    to

    UInt32* samples = (UInt32*)(ioData->mBuffers[0].mData);

    -Halle

  13. Corey September 18, 2010 at 2:09 pm #

    I tried the code. Do you by any chance know what UInt32 is doing for me? I’m used to using Float32’s where my samples are all between -1 and 1… where a waveform is easy to draw, and decibel metering is just finding an average of the absolute value of these values, scaled up so that 1 is 0dB.

    After implementing your code, I’m getting positive values that go from about 40 to 105. Any thoughts? I implemented a simple UIView with a green background and had it draw itself at a width in ratio to the dB value I’m getting and (though it gets jittery a tiny bit) it seems to hit on peaks… I just don’t know the accuracy or even understand the point of UInt32 as a sample structure.

    If I print out the raw UInt32 numbers, I get numbers as big as 3557007849. What does this mean in relation to the -1, 0, 1 relationship I’m used to seeing?

    Thanks! Kudos for your code.

  14. Halle September 18, 2010 at 4:41 pm #

    Hi Corey,

    What is basically going on here is that there is a pointer to an array of 16-bit integers (SInt16* samples), that is full of the individual 16-bit samples that we are analyzing from the buffer. So one SInt16 out of this array is a single sample with signed 16-bit sample data in it.

    A sample is itself a measure of amplitude since a DAC works by measuring the amplitude of an incoming sound wave however many times a second until it has enough data for a smooth rendering of the wave. The data that is stored to a single one of those samples is the height or depth of the wave at that moment in time. Since the sample has to describe the amplitude of a peak or a trough of a wave, a signed integer is going to put the midpoint at zero, where a negative value will describe a point on a trough and a positive value will describe a point on a peak. The values that are possible to store in a signed 16-bit integer are −32768 to 32767 so those are the maximum ranges possible above and below zero. I’m guessing that in the case of an unsigned sample, the midpoint would need to be the value that is half the largest value that can be stored in the sample, so if I’m correct the value range of your unsigned 32-bit sample would be 0 to 4,294,967,295 (the maximum value which can be stored in a UInt32) with a midpoint of 2,147,483,647. Below 2,147,483,647 represents a point on a trough and above it a point on a peak.

    When you ask:

    > If I print out the raw UInt32 numbers, I get numbers as big as 3557007849.
    > What does this mean in relation to the -1, 0, 1 relationship i’m used to seeing?

    I haven’t worked with arrays of Float32 samples but I’ve heard they are used in some audio formats for more precision, so I’m guessing that the answer to your question is just that you’re used to seeing arrays of Float32 samples that express wave amplitude on a scale of -1 to 1 with a midpoint of zero (probably with a ton of precision after the decimal point), and the UInt32 is expressing the same wave amplitude on a scale of zero to 4,294,967,295 with a midpoint (again, I’m guessing on this one) of 2,147,483,647. My decibel code attempts to express power as decibels on a scale of -n to zero because this is how Apple does it for Audio Queue services and I’m trying to make code that can work with the same UI whether it uses the built-in Audio Queue metering or this kind of Remote IO Audio Unit metering, so it is doing something different than describing wave amplitude over and under a midpoint (which is why we need to do some operations to the sample before we have that information).

    On to why the code has unexpected results (sorry, I didn’t really think about the signing issue before answering previously) – the first thing we’re doing is getting an absolute value from the integer in which the midpoint is assumed to be zero, which isn’t a helpful thing to do to UInt32s because the values do not range from negative to positive values with a midpoint of zero.

    What we could try instead is getting the absolute value of (samples[i] - 2147483647) instead of the absolute value of samples[i] by changing:

    Float32 absoluteValueOfSampleAmplitude = abs(samples[i]);

    to

    Float64 absoluteValueOfSampleAmplitude = abs(samples[i] - 2147483647);

    This should normalize the midpoint to zero before getting the absolute value. I think you might also need to change all or some of the Float32 variables to Float64 too.

    There might be easier or more efficient ways to do this and an alternate approach might be evident to you as well so don’t hesitate to pitch in with ideas. I don’t have an audio project around that is easily configurable to produce unsigned 32-bit samples so you’re going to need to test it out and get back to me — I’ll be interested to hear what you discover.

  15. Corey September 19, 2010 at 1:02 am #

    So… after spending my whole Saturday afternoon trying to figure out how to read this UInt32… I finally realized that the reason it’s being passed directly to the RemoteIO unit as 32-bit instead of 16-bit is because it’s interleaving both left and right channels in the low- and high-order bytes.

    So I split the UInt32 into two SInt16’s… and go figure, your algorithm works beautifully :-) the same as the AVAudioPlayer…

    Thanks much, man. This has been quite a hard road I’ve taken trying to figure out this Core Audio stuff… I’m just glad there are great people like you around to help us out.

  16. Halle September 19, 2010 at 11:04 am #

    Ha, that’s really interesting, I was also wondering about why it was in a UInt32, so it’s nice that you’ve solved the mystery. Glad it’s working for you! Audio Units for iOS is definitely a challenging area of Core Audio, although it’s impressive how it performs on the device once you get it working.
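
    For anyone else who lands on this with interleaved 16-bit stereo packed into UInt32s, here is a sketch of the split Corey describes; which half is the left channel and which is the right depends on your format flags, so treat the masks as illustrative:

    UInt32 *interleavedFrames = (UInt32 *)ioData->mBuffers[0].mData;
    for(UInt32 i = 0; i < inNumberFrames; i++) {
        SInt16 channelOne = (SInt16)(interleavedFrames[i] & 0xFFFF);         // low-order 16 bits
        SInt16 channelTwo = (SInt16)((interleavedFrames[i] >> 16) & 0xFFFF); // high-order 16 bits
        // ...then meter channelOne and channelTwo exactly as the SInt16 code in the post does...
    }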

  17. James October 12, 2010 at 12:21 am #

    I currently am working on a project where I’m attempting to retrieve audio file amplitudes. I created an AudioBufferList and then passed it to the samples variable you listed above… However each time I pass through the buffer data I get a different amplitude value.
    I’m new at using Core Audio so I may be doing this all wrong. The data I transfer into the AudioBufferList comes from an AudioBuffer which points to a cast void buffer. Also, the audio coming into the buffer is opened, not streamed…
    Do these points make any difference?
    Do I need to run the audio through the AudioUnitRender prior to it working correctly?

  18. Halle October 12, 2010 at 8:19 am #

    Hi James,

    Without seeing your code I don’t have any advice, but getting different amplitude values would be expected. You do need to render the audio first as seen in the code example.

  19. M November 15, 2010 at 10:10 pm #

    On this line: if(err != 0) NSLog(@"AudioUnitRender status is %d", err);

    I am getting a -50 error returned and a subsequent EXC_BAD_ACCESS crash on SInt16* samples = (SInt16*)(ioData->mBuffers[0].mData);

    What could be wrong?

  20. Halle November 16, 2010 at 8:07 am #

    Hi M.,

    Simulator or device? The crash is probably because nothing rendered so the next step of casting the void sample data into SInt16s and looping through the number of frames goes past the array boundary.

    A -50 error on the device might mean that there is a bad parameter in your AudioUnitRender() call. Here is the definition of AudioUnitRender():

    OSStatus AudioUnitRender (
    AudioUnit inUnit,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inOutputBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData
    );

    Most of those get their data from the callback, so I’m guessing that a potential issue could be with inUnit or inOutputBusNumber.

  21. ben November 18, 2010 at 4:02 am #

    pretty sure that’s a highpass filter – check the frequency response

  22. M November 18, 2010 at 2:09 pm #

    Thanks, it was just poor memory management on my part. Now I face a different issue. I have everything hooked up and working to return dB’s. I have noticed that the range is quite low, however, only returning about a 6 dB change. When I use your default values for DBOFFSET and LOWPASSFILTERTIMESLICE, I get results from -21 to -15 when testing by keeping silent in the room to test the lows and yelling into the iPhone to test the highs. How can I increase the dynamic range here?

  23. Halle November 18, 2010 at 2:33 pm #

    What is the ASBD for your audio?

  24. Halle November 27, 2010 at 3:01 pm #

    Hi Ben,

    Interesting, when I check the frequency response on the part (and solely that part) that is identified as the LP filter in the example I see the results I would expect to. But after the LP filter, a peak frequency is selected out of the smoothed results. You can read a lot more about that IIR low pass filter at the Stack Overflow discussion that is cited in the post as the origin of the filter.

  25. Le Quang Vinh February 3, 2011 at 8:03 am #

    Hi halle ,
    But what if I am using peakPowerForChannel and averagePowerForChannel – how can I display positive decibel values on screen for the user? I used this tutorial:

    http://www.iphonedevsdk.com/forum/iphone-sdk-development/45813-mic-blow-detection-playing-sounds.html

    But right now the decibel value on my device’s screen goes from -30 to 0. Can you help me? Why is the value -30 dB in a silent room?
    Thanks so much!

  26. Halle February 3, 2011 at 10:17 am #

    Hi Le Quang Vinh,

    This explanation for negative decibel values is from earlier on this page:

    https://www.politepix.com/2010/06/18/decibel-metering-from-an-iphone-audio-unit/comment-page-1/#comment-71

    This also has an explanation of how to show a positive number in my tutorial. I don’t know offhand how to show a positive number with someone else’s tutorial that I haven’t tried, but usually you will just see if you can identify the lowest-possible negative number and then add that same number as a positive number to the result you want to convert. -74 + 74 = 0, the quietest possible value. -30 + 74 = 44, a middle value. 0 + 74 = 74, the highest possible value before the mic distorts.

  27. Le Quang Vinh March 3, 2011 at 1:56 am #

    Hi Halle,
    NSLog(@"decibel level is %f", decibels);
    Is “decibels” an average value or a peak value?
    Thanks so much!

  28. Halle March 6, 2011 at 2:39 pm #

    Hi Le Quang Vinh,

    I believe it is peak since the highest value after the low-pass filter is selected. Without knowing how the AudioQueue value is derived, I don’t know for sure that this method results in the same value.

  29. Frangible April 1, 2011 at 10:33 am #

    You mention this code gives you about the same results as using metering. Is this due to the low-pass filter code?

    Right now I’m using AVAudioRecorder metering in a dumb little free app that computes counts per minute from the clicks of a Geiger counter. It works fairly well with weakly radioactive things.

    The problem is the peak level always sticks for 850ms, and though the avg level seems to recover faster, it’s not fast enough; increasing the timer polling from 30 Hz to 300 Hz doesn’t help. With a 1200 CPM uranium ore sample I’m getting ~200 CPM max on an iPad with little CPU use. At the very high end, on YouTube there are videos of CDV-700s getting 30K CPM readings from samples… I looked at the waveforms and yep, they really are 30K CPM, though it is difficult to tell.

    Is the above metering code more responsive than Apple’s metering? Do you have any suggestions on filtering that can go up to say, 30K CPM, but not peg the CPU or count a single click twice?

    Thanks for sharing this code though, it’s the closest example I’ve found for what I’m trying to do.

  30. Halle April 1, 2011 at 10:42 am #

    Although I don’t know definitively, I think it almost has to be less responsive because it is bounded by the callback latency. Apple probably reads the input at a lower level.

  31. james April 1, 2011 at 1:30 pm #

    Hi Halle

    Thanks for this code, I have a question about the results I’m seeing though. I’m recording in mono at 8000 Hz using signed big-endian integers. The decibel reading I get in the logs always stays around 10, very occasionally dropping to 9 or 8 if I stick a pair of headphones playing some audio in front of the mic of my iPhone. Are these values expected? I seem to get much more responsiveness when using metering with Audio Queues even taking into account the latency, so I think it might be a problem with my PCM format rather than your code.

  32. Halle April 1, 2011 at 1:37 pm #

    Hi James,

    It’s the format — reverse the endianness of the sample before reading (I’m pretty sure there are some good bit-shifting examples on Stack Overflow) and verify that it’s 16-bit and SInt16 is the appropriate kind of sample array to use.
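
    As a sketch (assuming the data really is big-endian SInt16), CFSwapInt16BigToHost from CFByteOrder.h will do the swap without hand-rolled bit shifting:

    SInt16 *bigEndianSamples = (SInt16 *)(ioData->mBuffers[0].mData);
    for(UInt32 i = 0; i < inNumberFrames; i++) {
        SInt16 sample = (SInt16)CFSwapInt16BigToHost((UInt16)bigEndianSamples[i]);
        // ...then take abs(sample) and carry on with the metering steps from the post...
    }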

  33. james April 1, 2011 at 2:49 pm #

    Thanks for getting back to me so quickly Halle. I’ll try reversing the endianness and see what happens. Also, I’ve read through the docs but can’t see an obvious way of determining what kind of data I should be getting in my AudioBufferList. I specify kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger for the format flags, so I’m guessing that a signed integer is what I should be receiving, and it looks that way in my logging.

  34. Billy April 9, 2011 at 10:38 pm #

    Thanks for the tutorial, I like your blog, even though your code sometimes is a little bit too advanced for me :) but I try my best.

    Keep up the good work! I will subscribe to your feed right now! :)

  35. Halle April 9, 2011 at 10:52 pm #

    Well thank you!

  36. Niran April 24, 2011 at 6:17 pm #

    Wow, this is what I am actually looking for.

    1. I am trying to develop a Decibel Meter, which is supposed to take the Input from the Mic

    2. And calculate the Decibel of your voice. [ I just started this as a Fun project]

    3. I am actually looking at Apple’s SpeakHere Example, http://developer.apple.com/library/ios/#samplecode/SpeakHere/Introduction/Intro.html

    Am I on the right track: if I combine your dB value converter code with the SpeakHere example (which uses an Audio Queue), should I then get the dB values? I would appreciate your response.

    And really a big thanks for your post.
    Best
    Niran

  37. Halle April 24, 2011 at 6:54 pm #

    Hi Niran,

    I actually saw your question about this on the iPhone SDK list and almost answered it there, but I didn’t have enough time, so I’m happy you found this post regardless. You don’t have to do anything so complicated as combining the code — in fact you probably don’t need to use my code example above at all because you are using Audio Queue Services and they have built-in decibel metering.
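
    For reference, switching the built-in metering on is just a property set followed by a property get whenever you want a reading. A sketch, assuming queue is your recording Audio Queue and the audio is mono:

    UInt32 enableMetering = 1;
    AudioQueueSetProperty(queue,
                          kAudioQueueProperty_EnableLevelMetering,
                          &enableMetering,
                          sizeof(enableMetering));

    // Then, per reading (one array element per channel):
    AudioQueueLevelMeterState levels[1];
    UInt32 levelsSize = sizeof(levels);
    AudioQueueGetProperty(queue,
                          kAudioQueueProperty_CurrentLevelMeterDB,
                          levels,
                          &levelsSize);
    // levels[0].mPeakPower and levels[0].mAveragePower are on the same zero-is-clipping scale.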

    I think that what you are seeing is that the decibel levels that your Audio Queue reports are negative numbers, is that correct? And you want to report a positive number that represents the actual power of the signal that is hitting the mic. Am I right so far, or are the values you are getting from the Audio Queue wrong for your purposes in some other way?

  38. Niran April 27, 2011 at 7:32 am #

    Hi Halle,

    For some reason I did not get your reply thru email. Anyways THANKS A BUNCH, for answering.

    Yes, the values that I am getting from the Audio Queue are negative. Even if I talk very loudly, the value reported from the Audio Queue is negative, which I think is not right. That’s why I wanted to see if there is any other way I can get the decibel value.

    Thanks
    Niran

  39. Halle April 27, 2011 at 10:09 am #

    No problem. The negative decibel number isn’t actually wrong, it’s just a particular convention for displaying power — it shows decibel values as a value below zero, where zero is the clipping point (point of distortion) for the recording device. You’ll see this approach on professional recording devices, and it is also the way that metering works for iPhone Audio Queues.

    That means that the negative decibel values are not absolute decibel values that show the sound pressure level in the environment. Instead they show how many more decibels of sound pressure are possible before the recording device starts to distort (-40dB means that if there were another 40dB, the mic would clip). This is a very useful approach for audio engineering purposes because it shows you the headroom available for the signal before it becomes unusable.

    For your purposes, if you can:

    1) obtain the decibel level when there is absolutely no input (might be something like -80, I don’t know if this is dependent on the device — to emphasize, this is not the value with very quiet input, this is the value with no input at all, for instance with the mic disabled),
    2) get the absolute value of that number, and
    3) add the absolute value to the negative value,

    that will change your scale into zero to positive values.

    This doesn’t turn the iPhone into a highly-accurate decibel meter, but I think it would give you the kind of output you are expecting for a just-for-fun project.

  40. Niran April 27, 2011 at 5:38 pm #

    Thanks very much. I sincerely appreciate your response/time you spent on this. I will do the app, and send you the results.

    Halle, you rock!!

  41. Niran May 8, 2011 at 10:22 am #

    Hi Halle,

    How are you?
    I am almost done with the project.

    I am using the following API from AudioFramework.

    UInt32 propertySize = format.mChannelsPerFrame * sizeof(AudioQueueLevelMeterState);
    AudioQueueGetProperty(queue,
                          (AudioQueuePropertyID)kAudioQueueProperty_CurrentLevelMeterDB,
                          levels,
                          &propertySize);

    I am obtaining the peakPower as follows

    return levels[0].mPeakPower;

    As you mentioned, when there is no mic input at all the value is something like -120 for peakPower [I ran the simulator, as I don’t know how to mute my iPod touch].

    As mentioned in the following post
    http://stackoverflow.com/questions/1281494/how-to-obtain-accurate-decibel-leve-with-cocoa

    The peakPower value returned when I talked very loudly was 0.0.

    If I add 120 to 0.0, that’s a very loud sound, isn’t it?

    So I am back to square one :)

    Thanks
    Niran

  42. Halle May 8, 2011 at 9:09 pm #

    Hi Niran,

    Yup, sorry, I thought I was clear that this would just transpose your scale of quietest->clipping into positive numbers instead of negative numbers, without changing the size of the scale or what it measures.

    Good luck with your search for a method of measuring SPL!

    -Halle

  43. Niran May 9, 2011 at 4:01 am #

    Hi Halle,

    You were very clear about converting the Scale to positive number.

    It’s just that it took me a long time to understand all the differences between SPL/dBFS/dB etc. :)

    Thanks a ton!!
    Niran

  44. Tim May 2, 2012 at 3:00 am #

    In the above code the condition (sampleDB == sampleDB) should always be true, and so seems unnecessary.

    Thanks a bunch for sharing this code!

  45. Halle May 2, 2012 at 8:06 am #

    It has a role that is explained in the comments. It’s a quick way of throwing out any NaN values that might come through for unforeseen reasons. Here’s another example for you: http://stackoverflow.com/a/2109282/119717

  46. thom June 6, 2012 at 1:57 pm #

    Hello Halle,

    I’ve read through all the comments, but one thing I just can’t figure out: if I want the meter to respond only between 400 and 500 Hz, what is the best way to get that to work? The purpose is so you can tune instruments with this meter. I hope you still read this topic.

    Thanks,
    Thom

  47. Halle June 6, 2012 at 2:04 pm #

    Hiya Thom,

    Pitch detection is going to be a little outside of the topic of this blog post, but here is a good SO thread that should get you started and then some: http://stackoverflow.com/questions/7623166/avaudio-detect-note-pitch-etc-iphone-xcode-objective-c

  48. Ants August 27, 2012 at 10:39 pm #

    Hey Halle,

    A couple of questions. First, about the if that checks for a rational number that isn’t infinite:

    if((sampleDB == sampleDB) && (sampleDB = -DBL_MAX)) {

    Shouldn’t that be sampleDB != -DBL_MAX ? I seem to be getting values of DBOFFSET.

    If I change it to != then I get values out, but the range is really small. I tried changing DBOFFSET to -94, however all that did was change the offset (DUH). In my case the value seems to be around -9.7 with only slight variation between -10 and -9.6. Is it the low-pass filter that is having this effect?

    Thanks Ants

  49. Halle August 27, 2012 at 10:45 pm #

    Hi Ants,

    I’d guess that you’re getting unexpected results because of a different input besides a signed 16-bit integer using the full range of samples that this assumes. Since you’re getting floats out, am I correct that your input is a float value?

  50. Waruna November 8, 2012 at 4:42 pm #

    Hi,
    Thanks for your resourceful article. However, when I tried this example I always got -74.0 values. Do you know what could be the reason for this?
    I used a recordingCallback function instead of AudioUnitRenderCallback – could this be the problem?
    I appreciate your answer or any suggestion.

  51. Halle November 8, 2012 at 4:56 pm #

    Hi Waruna,

    -74.0 means no input at all. So, you only need to troubleshoot why there is no rendered content in the buffers that are in your callback. All Audio Unit and Audio Session code has a method of error checking (including the buffer callback), so log that error checking code and it should tell you what is happening.
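
    A sketch of the kind of logging I mean (the helper name is made up; many Core Audio errors are four-character codes, so it prints both forms):

    static void LogCoreAudioError(OSStatus error, const char *operation) {
        if(error == noErr) return;
        UInt32 bigEndianCode = CFSwapInt32HostToBig((UInt32)error);
        char code[5];
        memcpy(code, &bigEndianCode, 4); // reinterpret the status as four characters
        code[4] = '\0';
        NSLog(@"Error in %s: %d ('%.4s')", operation, (int)error, code);
    }

    // Usage: LogCoreAudioError(AudioUnitInitialize(audioUnit), "AudioUnitInitialize");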

  52. Pier January 1, 2013 at 8:26 am #

    Thanks for this… trying to figure it out.

    There’s a typo here, no?
    if((sampleDB == sampleDB) && (sampleDB = -DBL_MAX))

    should be

    if((sampleDB == sampleDB) && (sampleDB != -DBL_MAX))

  53. Halle January 3, 2013 at 10:49 am #

    Hi Pier,

    You are correct, I’ve edited the code in the post. Thanks for catching that!