Abstract:
Daily and professional life demand a high degree of communication ability, yet every fourth adult above the age of 50 is hearing-impaired, a fraction that steadily increases in an aging society. For an autonomous, self-confident, and long productive life, good speech understanding in everyday situations is necessary to reduce listening effort. For this purpose, an app-based assistance system is required that makes everyday acoustic scenarios more transparent by allowing the user to interactively focus on a preferred sound source. The key component of this assistance system is the blind source separation algorithm. Developing such an app within a short-term research project with limited time and personnel raises numerous challenges. One of the key challenges is porting PC-based source separation algorithms to a mobile device without requiring a native implementation, and integrating the ported algorithms into the mobile graphical user interface (GUI) app. At the same time, this raises the question of how large a runtime performance penalty such porting incurs. This paper explains the realized porting and integration method and provides a runtime performance benchmark that compares the PC-based algorithms to the ported algorithms in different computing environments. It then draws a conclusion about the practicability of the proposed porting method.