In summary

Boiling down the above discussion into a few rules of thumb for code that executes in a real-time audio callback (a minimal example callback follows the list):

  • Don’t allocate or deallocate memory
  • Don’t lock a mutex
  • Don’t read from or write to the filesystem or otherwise perform I/O. (In case there’s any doubt, this includes things like calling printf or NSLog, or using GUI APIs.)
  • Don’t call OS functions that may block waiting for something
  • Don’t execute any code that has unpredictable or poor worst-case timing behavior
  • Don’t call any code that does or may do any of the above
  • Don’t call any code that you don’t trust to follow these rules
  • On Apple operating systems, follow Apple’s guidelines
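
To make the rules above concrete, here is a minimal C sketch of a callback body that stays within them. The function and struct names (processAudio, CallbackState) are hypothetical, not from the original article; the point is only that the callback touches nothing but pre-allocated memory, takes no locks, performs no I/O, and runs in time proportional to the buffer size.

```c
#include <stddef.h>

/* Hypothetical per-stream state, allocated and initialized on a
   non-time-critical thread before the stream starts. */
typedef struct {
    float gain;   /* current gain parameter */
} CallbackState;

/* A callback body that follows the rules above: no malloc/free, no
   mutexes, no printf/NSLog, no file or network access, and a
   predictable O(framesPerBuffer) running time. */
void processAudio(CallbackState *state,
                  const float *input, float *output,
                  size_t framesPerBuffer)
{
    for (size_t i = 0; i < framesPerBuffer; ++i)
        output[i] = input[i] * state->gain;
}
```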

There are a few things you should do where possible:

  • Do use algorithms with good worst-case time complexity (ideally O(1) worst-case)
  • Do amortize computation across many audio samples to smooth out CPU usage rather than using “bursty” algorithms that occasionally have long processing times
  • Do pre-allocate or pre-compute data in a non-time-critical thread
  • Do employ non-shared, audio-callback-only data structures so you don’t need to think about sharing, concurrency, and locks (one common approach, a lock-free queue filled from a non-time-critical thread, is sketched below)
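
One common way to satisfy the last two points is to pre-compute or pre-allocate data on a non-time-critical thread and hand it to the callback through a wait-free single-producer/single-consumer queue, so the audio thread never waits on a lock. The sketch below is an illustration, not code from the article; the names (SpscQueue, spsc_push, spsc_pop) and the fixed capacity are assumptions.

```c
#include <stdatomic.h>
#include <stddef.h>

#define QUEUE_CAPACITY 256  /* power of two; all storage is pre-allocated */

/* Hypothetical single-producer/single-consumer queue: the control thread
   pushes, the audio callback pops. No locks, no allocation after setup. */
typedef struct {
    float          items[QUEUE_CAPACITY];
    _Atomic size_t writeIndex;   /* advanced only by the producer */
    _Atomic size_t readIndex;    /* advanced only by the consumer */
} SpscQueue;

/* Producer side: call from a non-time-critical thread. */
int spsc_push(SpscQueue *q, float value)
{
    size_t w = atomic_load_explicit(&q->writeIndex, memory_order_relaxed);
    size_t r = atomic_load_explicit(&q->readIndex,  memory_order_acquire);
    if (w - r == QUEUE_CAPACITY)
        return 0;                                  /* full; retry later */
    q->items[w & (QUEUE_CAPACITY - 1)] = value;
    atomic_store_explicit(&q->writeIndex, w + 1, memory_order_release);
    return 1;
}

/* Consumer side: safe inside the audio callback -- it never blocks,
   never allocates, and completes in constant time. */
int spsc_pop(SpscQueue *q, float *out)
{
    size_t r = atomic_load_explicit(&q->readIndex,  memory_order_relaxed);
    size_t w = atomic_load_explicit(&q->writeIndex, memory_order_acquire);
    if (r == w)
        return 0;                                  /* empty; nothing to do */
    *out = q->items[r & (QUEUE_CAPACITY - 1)];
    atomic_store_explicit(&q->readIndex, r + 1, memory_order_release);
    return 1;
}
```

The callback drains the queue with spsc_pop at the start of each block and otherwise works only on its own private state, so no mutex is ever taken on the audio thread.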

Just remember: time waits for nothing and you don’t want to glitch.

Source: http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing