Voice recognition has evolved from recognizing only numbers to responding to users in conversation. Currently, the most widely used voice user interfaces (VUIs) are Siri, Alexa, and Google Assistant. In Your Computer Is on Fire, however, Thomas S. Mullaney observes that people still have to adjust their voices to "reduce recognition errors" (Mullaney 181), creating both pressure to fit in and a sense of insecurity. These problems tend to fall into three categories: 1) dialogue friction, caused by unstable systems or by the users themselves (e.g., stumbling mid-utterance); 2) accents that voice assistants, which often rely on particular training databases, cannot fully understand because of differences in pronunciation; and 3) word use (including irony, sarcasm, and humor) that the databases underlying these devices may not comprehend. Proposed solutions to these issues involve developing more comprehensive databases and improved detection components in VUIs, such as Query Rewriting (QR) and Contextual Rephrase (ContReph). This literature review discusses the methods addressing each issue and concentrates on the advantages of ContReph over QR, breaking down this newer dialogue-friction detection model. For future work, I am building an app that tests the aforementioned methods by using the Alan Voice AI Platform to add a VUI to an iOS application (Apple). This application will serve as a test bed for different approaches to keyword detection and, in the long run, aims to advance voice technologies for voice-assistance applications and to increase social equality.
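To make the idea of query rewriting concrete, the following is a minimal sketch of my own (not the QR or ContReph models from the literature): it detects likely misrecognized tokens in a transcript by fuzzy-matching them against a small vocabulary of expected command keywords and rewrites near misses. The vocabulary, threshold, and function name are hypothetical illustrations; production systems would instead use learned rewriting models trained on large interaction logs.

```python
# Illustrative sketch only: fuzzy keyword-based query rewriting.
# EXPECTED_KEYWORDS and the 0.7 cutoff are hypothetical choices.
import difflib

EXPECTED_KEYWORDS = {"weather", "timer", "music", "reminder"}

def rewrite_query(transcript: str, cutoff: float = 0.7) -> str:
    """Replace tokens that nearly match an expected keyword with that keyword."""
    rewritten = []
    for token in transcript.lower().split():
        # get_close_matches returns the closest keyword above the cutoff, if any
        match = difflib.get_close_matches(token, EXPECTED_KEYWORDS, n=1, cutoff=cutoff)
        rewritten.append(match[0] if match else token)
    return " ".join(rewritten)

print(rewrite_query("set a timor for ten minutes"))  # "timor" is rewritten to "timer"
```

A learned rewriter generalizes this idea: instead of string similarity over a fixed vocabulary, it scores candidate rewrites using context from the whole dialogue, which is what allows ContReph-style models to handle friction beyond simple one-word misrecognitions.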