Speech recognition is software that converts speech into text. This means you can speak into your phone and have your words turned into text that you can share with others through email or social media. Cool, right?
Many tech companies, especially the owners of mobile operating systems, have been working extensively to improve the quality of speech recognition software. Google is one of the pioneers in this space, as its Cloud Speech API is one of the most advanced and sophisticated products of its kind.
If you’re wondering what in the world the Cloud Speech API is, it’s simply a service that allows third-party companies and their developers to integrate Google’s speech recognition technology into their own products.
You can do a ton of things with the Cloud Speech API, such as recognizing audio, integrating with cloud storage, filtering inappropriate content, and much more. One of the most widely used applications of Google’s Cloud Speech API is in contact centers, where a call can be routed to the right department based on what the customer is saying.
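To make that concrete, here is a minimal sketch of what transcribing a file with the API might look like using the google-cloud-speech Python client. The bucket and file names are placeholders, and exact class names have varied across client library versions; note the profanity_filter flag, which is the API's built-in way of masking inappropriate content.

```python
# A minimal sketch, assuming the google-cloud-speech Python client
# and a placeholder Cloud Storage file.
from google.cloud import speech

client = speech.SpeechClient()

# Audio can be sent inline or referenced from Cloud Storage.
audio = speech.RecognitionAudio(uri="gs://my-bucket/call-recording.flac")
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    sample_rate_hertz=16000,
    language_code="en-US",
    profanity_filter=True,  # masks inappropriate words in the transcript
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    # Each result carries one or more alternatives, best first.
    print(result.alternatives[0].transcript)
```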
Many companies have been using this API to deliver a better experience for their users. A case in point is Twilio, which uses the API to convert speech into text across its products, giving users the flexibility to talk directly to the software instead of going through the more laborious process of typing.
Due to the growing use of this product, Google has been working to enhance its functionality. Recently, it announced several changes to the Cloud Speech API to make it more usable and boost its adoption around the world.
One of the notable changes is word-level time offsets, more popularly known as timestamps. So, what’s the use of this feature? It makes it easier than ever to find the exact spot where a particular word occurs. For example, let’s say you have the audio of an important person’s interview and you want to hear just what he said on a particular topic. In the past, you had to go through the entire recording to identify where he made a particular statement. With this new feature, you can simply search for a keyword in the audio and bring up all the timestamps where that word was uttered.
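As a hedged sketch of how that keyword search might look: the enable_word_time_offsets flag asks the API to return a start and end time for every word, and the keyword ("budget") and file name below are invented for illustration.

```python
# A sketch of word-level timestamps, assuming the google-cloud-speech
# Python client; the file and keyword are hypothetical.
from google.cloud import speech

client = speech.SpeechClient()

audio = speech.RecognitionAudio(uri="gs://my-bucket/interview.flac")
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    sample_rate_hertz=16000,
    language_code="en-US",
    enable_word_time_offsets=True,  # request per-word timestamps
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    for word_info in result.alternatives[0].words:
        # Print every spot where the keyword was uttered.
        if word_info.word.lower() == "budget":
            print(word_info.word, "at", word_info.start_time)
```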
This way, you’ll spend less time finding what you want, thereby increasing your productivity. What’s more, you can even have the text displayed while the audio is playing, in real time. It’s similar to the closed captions you see while a video is playing, except that those are mostly pre-written; here, you get the text as you hear it.
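That real-time behavior maps to the API’s streaming mode. Below is a rough sketch using the Python client’s streaming helper with interim_results turned on; the raw audio file stands in for a live source such as a microphone, and the exact call signature has differed between client library versions.

```python
# A streaming-recognition sketch, assuming the google-cloud-speech
# Python client and a hypothetical raw 16-bit PCM audio source.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)
streaming_config = speech.StreamingRecognitionConfig(
    config=config,
    interim_results=True,  # emit partial transcripts while audio is still arriving
)

def request_stream():
    # Hypothetical source of audio chunks (e.g. a microphone capture).
    with open("live_audio.raw", "rb") as source:
        while chunk := source.read(4096):
            yield speech.StreamingRecognizeRequest(audio_content=chunk)

responses = client.streaming_recognize(config=streaming_config, requests=request_stream())
for response in responses:
    for result in response.results:
        # Interim results keep updating; final results are stable.
        prefix = "final:" if result.is_final else "interim:"
        print(prefix, result.alternatives[0].transcript)
```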
According to product manager Dan Aharon, this feature is something customers have been requesting for some time, so Google worked to deliver it.
In addition, the new version also supports longer files. Instead of a maximum of 80 minutes, you can now have up to 180 minutes of audio transcribed.
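Longer files like these are handled asynchronously rather than in a single blocking call. A minimal sketch, again assuming the Python client and a placeholder file:

```python
# Asynchronous transcription for long audio, assuming the
# google-cloud-speech Python client; the file is a placeholder.
from google.cloud import speech

client = speech.SpeechClient()

audio = speech.RecognitionAudio(uri="gs://my-bucket/long-recording.flac")
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    sample_rate_hertz=16000,
    language_code="en-US",
)

# The call returns a long-running operation that can be polled
# until the full transcript is ready.
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=3600)  # wait up to an hour

for result in response.results:
    print(result.alternatives[0].transcript)
```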
All of these changes are sure to add to the appeal of Google’s Cloud Speech API.