iOS - Accurate timer using AudioUnit


I'm trying to make an accurate timer to analyze an input. I'd like to be able to measure 1% deviation in signals of ~200 ms. My understanding is that using an AudioUnit I should be able to get <1 ms. I tried implementing the code from Stefan Popp's example, and after updating a few things to get it to work on Xcode 6.3, I have the example working. However:

  1. While I do eventually want to capture audio, I thought there should be some way to get a notification, like NSTimer, so I tried AudioUnitAddRenderNotify, but it does only what it says it should - i.e. it's tied to the render, not to an arbitrary timer. Is there some way to get a callback triggered without having to record or play?

  2. When I examine mSampleTime, I find that the interval between slices matches inNumberFrames - 512 - which works out to 11.6 ms. I see the same interval for both record and play. I need more resolution than that.

I tried playing with kAudioSessionProperty_PreferredHardwareIOBufferDuration, but all the examples I could find use the deprecated AudioSession API, so I tried to convert it to AudioUnits:

    Float32 preferredBufferSize = .001; // in seconds
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                                  kAudioUnitScope_Output, kOutputBus,
                                  &preferredBufferSize, sizeof(preferredBufferSize));

but I get OSStatus -10879, kAudioUnitErr_InvalidProperty.
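That error makes sense: kAudioSessionProperty_PreferredHardwareIOBufferDuration is an AudioSession property, not an AudioUnit property, so AudioUnitSetProperty rejects it. The modern replacement is AVAudioSession. A minimal sketch (the 0.001 s request is illustrative; the hardware is free to grant a larger duration, so read back the actual value):

```objectivec
#import <AVFoundation/AVFoundation.h>

// Sketch: request a ~1 ms I/O buffer via AVAudioSession instead of the
// deprecated AudioSession API, then check what was actually granted.
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setPreferredIOBufferDuration:0.001 error:&error];
[session setActive:YES error:&error];
NSLog(@"granted I/O buffer duration: %f s", session.IOBufferDuration);
```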

Then I tried kAudioUnitProperty_MaximumFramesPerSlice with values of 128 and 256, but inNumberFrames is still 512.

    UInt32 maxFrames = 128;
    status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                                  kAudioUnitScope_Global, 0,
                                  &maxFrames, sizeof(maxFrames));
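One thing worth noting: kAudioUnitProperty_MaximumFramesPerSlice is a ceiling on how many frames the unit must be prepared to render in one call, not the hardware I/O size, so lowering it won't shrink inNumberFrames; the session's I/O buffer duration is what controls that. A sketch for inspecting what is actually in effect (`audioUnit` is assumed to be the configured RemoteIO unit):

```objectivec
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>

// Read back the slice ceiling and the session's granted I/O duration.
UInt32 maxFrames = 0;
UInt32 size = sizeof(maxFrames);
AudioUnitGetProperty(audioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                     kAudioUnitScope_Global, 0, &maxFrames, &size);
NSLog(@"maxFramesPerSlice = %u, session I/O duration = %f s",
      (unsigned)maxFrames, [AVAudioSession sharedInstance].IOBufferDuration);
```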

[EDIT] I am trying to compare the timing of an input (user's choice of MIDI or microphone) to when it should be. Specifically, is the instrument being played before or after the beat/metronome, and by how much? This is for musicians, not a game, so precision is expected.

[EDIT] The answers seem re-active to events, i.e. they let me precisely see when something happened, but I don't see how to do something accurately. My fault for not being clear. My app needs to be the metronome as well - synchronize playing a click on the beat and flashing a dot on the beat - so that I can then analyze the user's action to compare the timing. If I can't play the beat accurately, the rest falls apart. Maybe I'm supposed to record audio - even if I don't want it - just to get inTimeStamp from the callback?

[EDIT] Currently my metronome is:

    AVAudioPlayer *audioPlayer;  // ivar, so syncFired: can reach it

    - (void)setupAudio
    {
        NSString *path = [NSString stringWithFormat:@"%@/click.mp3",
                          [[NSBundle mainBundle] resourcePath]];
        NSURL *soundURL = [NSURL fileURLWithPath:path];
        audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL error:nil];
        [audioPlayer prepareToPlay];

        CADisplayLink *syncTimer = [CADisplayLink displayLinkWithTarget:self
                                                               selector:@selector(syncFired:)];
        syncTimer.frameInterval = 30;  // fire every 30 display frames (~0.5 s at 60 Hz)
        [syncTimer addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
    }

    - (void)syncFired:(CADisplayLink *)displayLink
    {
        [audioPlayer play];
    }

You should use a circular buffer, and perform your analysis on the signal in chunks that match your desired frame count on your own timer. Set up a render callback, then feed your circular buffer the input audio in the callback. Then set up your own timer which pulls from the tail of the buffer and does your analysis. That way you could be feeding the buffer 1024 frames every 0.023 seconds, while your analysis timer could fire maybe every 0.000725 seconds and analyze 32 samples. Here is a related question about circular buffers.
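A minimal sketch of that arrangement, assuming a RemoteIO unit already configured for 16-bit mono input and a TPCircularBuffer initialized elsewhere (the format, bus number, and refCon choice here are illustrative, not from the question):

```objectivec
#import <AudioToolbox/AudioToolbox.h>
#import "TPCircularBuffer.h"

static TPCircularBuffer ringBuffer; // initialized elsewhere with TPCircularBufferInit()

// Input callback: pull the freshly captured audio, then hand it to the
// lock-free ring buffer. No analysis happens on the render thread.
static OSStatus inputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    AudioUnit unit = (AudioUnit)inRefCon; // assumes the unit was passed as refCon

    SInt16 samples[4096]; // must be >= maxFramesPerSlice
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
    bufferList.mBuffers[0].mData = samples;

    OSStatus status = AudioUnitRender(unit, ioActionFlags, inTimeStamp,
                                      1 /* input bus */, inNumberFrames, &bufferList);
    if (status == noErr) {
        TPCircularBufferProduceBytes(&ringBuffer, samples,
                                     inNumberFrames * sizeof(SInt16));
    }
    return status;
}
```

On the analysis side, your own timer (e.g. a dispatch timer) calls TPCircularBufferTail() to look at however many samples it wants and TPCircularBufferConsume() to release them, so the render thread never blocks.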

EDIT

To get precision timing using your ring buffer, also store the timestamp corresponding to the audio buffer. I use TPCircularBuffer for doing just that. TPCircularBufferPrepareEmptyAudioBufferList, TPCircularBufferProduceAudioBufferList, and TPCircularBufferNextBufferList will copy and retrieve the audio buffer and timestamp to and from the ring buffer. Then when you are doing your analysis, there will be a timestamp corresponding to each buffer, eliminating the need to do all of your work in the render thread, and allowing you to pick and choose your analysis window.
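A rough sketch of the timestamped variant, using the AudioBufferList extension that ships with TPCircularBuffer (verify the exact signatures against the header; the buffer and sample names are carried over from the earlier sketch as assumptions):

```objectivec
#import <AudioToolbox/AudioToolbox.h>
#import "TPCircularBuffer+AudioBufferList.h"

// Producer (render thread): copy inNumberFrames of audio plus its timestamp.
AudioBufferList *abl = TPCircularBufferPrepareEmptyAudioBufferList(
    &ringBuffer, 1, inNumberFrames * sizeof(SInt16), inTimeStamp);
if (abl) {
    memcpy(abl->mBuffers[0].mData, samples, inNumberFrames * sizeof(SInt16));
    TPCircularBufferProduceAudioBufferList(&ringBuffer, inTimeStamp);
}

// Consumer (analysis timer): retrieve the next buffer along with its timestamp.
AudioTimeStamp timestamp;
AudioBufferList *next = TPCircularBufferNextBufferList(&ringBuffer, &timestamp);
if (next) {
    // ... analyze next->mBuffers[0], using timestamp.mSampleTime / mHostTime
    //     to know exactly when these samples were captured ...
    TPCircularBufferConsumeNextBufferList(&ringBuffer);
}
```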

