Google Slides can now transcribe verbal presentations to create real-time closed captions.

Google has added a new automated closed-caption feature to Google Slides, its presentation program, which lets you create real-time captions from your spoken words.

The feature begins rolling out globally today; however, it will initially be available in English only.

The new feature is designed primarily to help people who are deaf or hard of hearing. The general idea is that presenters speaking to a full room can supplement the written words already on their slides with closed captions of their accompanying verbal presentation.

How it works

Just before starting your presentation, click the small "CC" (closed captions) button in the navigation bar (you can also use the keyboard shortcut Ctrl-Shift-C on Windows and Chrome OS, or ⌘-Shift-C on Mac). Google Slides then uses your computer's built-in microphone to listen to your voice and automatically converts it into text displayed at the bottom of your presentation.

Above: Closed captions in Google Slides
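For readers curious about the underlying technique, here is a minimal sketch of live captioning in the browser using the Web Speech API, which offers the same kind of real-time speech-to-text that powers features like this. It is not Google Slides' actual implementation, and the "caption-bar" overlay element is a hypothetical placeholder.

```typescript
// Minimal sketch: live captions in the browser via the Web Speech API.
// Not Google's internal implementation; "caption-bar" is a hypothetical overlay element.

// The API is vendor-prefixed in Chrome.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.lang = "en-US";        // captions are English-only at launch
recognition.continuous = true;     // keep listening for the whole presentation
recognition.interimResults = true; // show partial results as the presenter speaks

const captionBar = document.getElementById("caption-bar"); // hypothetical caption overlay

recognition.onresult = (event: any) => {
  // Stitch together the transcripts received so far and render them as a caption.
  let transcript = "";
  for (let i = event.resultIndex; i < event.results.length; i++) {
    transcript += event.results[i][0].transcript;
  }
  if (captionBar) {
    captionBar.textContent = transcript;
  }
};

recognition.start(); // prompts the browser for microphone access
```

In practice, a production feature would also handle recognition errors, restart the session if it times out, and trim old text so the caption bar only shows the most recent phrases.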

Although the primary target audience for this new feature is people with some form of hearing loss, Google says it anticipates use cases well beyond that. A room may be noisy, for example, or a presenter may not project their voice sufficiently. Automated closed captions should help everyone understand what a presenter is saying.

"The fact that the feature was designed primarily for accessibility purposes, but that it is also useful to all users, shows the general interest for everyone to incorporate designs and functionalities available in products", said the company in a blog post.

Speech Recognition

Google already offers many voice-recognition features across its products. Google Docs, for example, lets you edit and format text using your voice, voice input is available through its Gboard mobile keyboard app, and Android TV users can search for content with natural-language voice queries. With the rise of intelligent virtual assistants, the tech giants are competing to put their voice-activated assistants in the hands of as many people as possible; Google Assistant, for its part, gains new intelligence features almost every week.

Making products more accessible is another key trend for technology companies. About 15% of people worldwide live with some form of disability, according to World Bank data, which is roughly 1 billion people. Last month, for example, Google revealed that it had finally brought native hearing aid support to Android, a feature long requested by the hearing-impaired community.

Pairing voice recognition with accessibility is thus an obvious step for Google, given its recent and ongoing areas of focus.

It is also worth noting that no one enjoys transcribing by hand, which is why we have seen a wave of automatic transcription services roll out lately. Startup AISense recently updated its voice recording app with a new feature that automatically transcribes live events, while Zoom now uses AI to automatically transcribe videoconferences. Microsoft, too, is investing heavily in speech services to enhance its own suite of cloud-based tools.

The new Google Slides feature is currently available only on desktops and laptops, and Google plans to extend it to multiple languages in the future.
