With the announcement of Google's Nest Hub, it's clear that voice interfaces are no longer just about voice. Of course, Amazon has long supported graphical interfaces on its Alexa-enabled devices, but in the coming years we may see an even greater blend of visual and speech interaction. Looking ahead, how can you best use Amazon's display templates to develop Alexa skills for the Echo Show, or build Google Actions for smart displays? Whether you want to rework an existing skill or Action, or design a new one with a screen in mind, here are some quick tips and best practices for a smooth user experience.
Translate long prompts into powerful visuals
When your application supports a screen, users do not need long spoken prompts because they can see all the available options at a glance. It's fine to let the assistant read prompts and instructions aloud, but don't print a full transcript on the screen. Instead, use rich visual aids, such as carousels and illustrated buttons, that the user can browse and select. This gets the most out of the available screen while keeping your skill or Action engaging.
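As a concrete illustration, a short spoken prompt can be paired with an on-screen list that carries the detail. The sketch below builds a response following Alexa's older `Display.RenderTemplate` JSON shape (APL is the newer route); the menu items and prompt text are hypothetical placeholders.

```python
# Sketch: pair a brief spoken prompt with an on-screen list instead of
# reading every option aloud. Follows the Alexa Display.RenderTemplate
# response shape; the list items here are hypothetical placeholders.

def build_menu_response(options):
    """Speak a short prompt and show the full menu on screen."""
    return {
        "version": "1.0",
        "response": {
            # Keep the spoken part short -- the screen carries the detail.
            "outputSpeech": {
                "type": "PlainText",
                "text": "Here are today's options. Pick one from the screen.",
            },
            "directives": [
                {
                    "type": "Display.RenderTemplate",
                    "template": {
                        "type": "ListTemplate1",
                        "title": "Today's options",
                        "listItems": [
                            {
                                "token": f"item-{i}",
                                "textContent": {
                                    "primaryText": {
                                        "type": "PlainText",
                                        "text": name,
                                    }
                                },
                            }
                            for i, name in enumerate(options)
                        ],
                    },
                }
            ],
            "shouldEndSession": False,
        },
    }

resp = build_menu_response(["Coffee", "Tea", "Juice"])
```

The spoken sentence stays the same no matter how many options there are; only the on-screen list grows.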
Don't let loading times drag
When designing for Amazon Alexa displays or Google smart displays, make sure your app doesn't take long to load. Images are one of the main factors affecting load times. If your skill pulls images from an external source, let the skill start before, not after, all the images have loaded. That way, integrating multimedia content won't hurt the user experience.
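One practical way to achieve this is to reference images by URL rather than fetching them server-side, so your skill responds immediately and the device loads each image in the background. The sketch below builds an Alexa-style image object with multiple sizes; the URLs are placeholders.

```python
# Sketch: reference images by URL instead of downloading them in the
# skill's backend, so the response is sent immediately and the device
# fetches images in the background. URLs are hypothetical placeholders.

def image_source(small_url, large_url):
    """Alexa-style image object: the device picks a size and fetches it."""
    return {
        "contentDescription": "Product photo",
        "sources": [
            {"url": small_url, "size": "SMALL"},
            {"url": large_url, "size": "LARGE"},
        ],
    }

# The skill returns right away; it never blocks on image downloads.
img = image_source(
    "https://example.com/photo-340.png",
    "https://example.com/photo-1024.png",
)
```

Offering more than one size also lets each device choose the variant it can render fastest.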
Dress your skill or action appropriately
Google allows custom themes when developing for smart displays, letting you design fully customized experiences. This includes changing the background image or color, implementing a custom voice, designing a custom font style, and so on. For Echo devices with screens, Amazon recommends sticking to the default font styles, which are designed specifically for readability on Echo devices. And when creating Alexa skills for the Echo Show and other screened Amazon devices, here's another tip: don't use background images to convey important information. Background images are automatically resized and cropped on the Echo Spot, which means important visual information can be lost without your knowledge. Instead, put anything important in foreground images, and don't format them for one specific device, as that can make them harder to see on another Alexa screen.
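Put differently: key information belongs in the text and foreground-image fields, with the background kept purely decorative. The sketch below follows Alexa's `BodyTemplate2` shape; the titles and URLs are example placeholders.

```python
# Sketch: keep important information in text and foreground image fields,
# using the background only decoratively, since round screens like the
# Echo Spot crop backgrounds. Follows Alexa's BodyTemplate2 shape;
# titles and URLs below are example placeholders.

def build_body_template(title, detail, fg_url, bg_url):
    return {
        "type": "BodyTemplate2",
        "title": title,
        # Key information lives in textContent and the foreground image...
        "textContent": {
            "primaryText": {"type": "PlainText", "text": detail},
        },
        "image": {"sources": [{"url": fg_url}]},
        # ...while the background is purely decorative and safe to crop.
        "backgroundImage": {"sources": [{"url": bg_url}]},
    }

card = build_body_template(
    "Ramen Shop",
    "Open until 9 PM",
    "https://example.com/storefront.png",
    "https://example.com/texture.png",
)
```

If the background were cropped away entirely, the card above would still convey everything the user needs.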
Design for screen and non-screen devices
When using Amazon display templates or designing Alexa skills for the Echo Show, make sure screen support is enabled in your code. If it isn't, Alexa falls back to the standard display cards shown in the mobile app. Display cards may be fine for many skills, but developers who want to use screens to their full potential need to turn on display support in the code itself. And whether you're developing Actions for the Google Home Hub or skills for screened Alexa devices, don't forget about users without screens. For example, Kindle device users can turn the on-screen display on and off while using Alexa. Users will interact differently with your skill or Action depending on whether they have a screen, so design for both.
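On the Alexa side, one common way to branch between screen and voice-only behavior is to inspect `supportedInterfaces` in the request envelope. The helper name below is ours, but the envelope path follows Alexa's documented request shape.

```python
# Sketch: decide whether the requesting device has a screen by checking
# supportedInterfaces in the Alexa request envelope. The helper name is
# ours; the envelope structure follows Alexa's documented request shape.

def has_screen(request_envelope: dict) -> bool:
    interfaces = (
        request_envelope.get("context", {})
        .get("System", {})
        .get("device", {})
        .get("supportedInterfaces", {})
    )
    # "Display" (templates) or "Alexa.Presentation.APL" indicate a screen.
    return "Display" in interfaces or "Alexa.Presentation.APL" in interfaces

# Voice-only device: no display interface advertised.
voice_only = {"context": {"System": {"device": {
    "supportedInterfaces": {"AudioPlayer": {}}}}}}

# Echo Show-style device: advertises Display support.
with_screen = {"context": {"System": {"device": {
    "supportedInterfaces": {"Display": {}, "AudioPlayer": {}}}}}}
```

A skill can then attach display directives only when `has_screen(...)` is true, and fall back to a fuller spoken prompt otherwise.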
Offer conversational suggestions
This is a classic tip for designing conversational interfaces, but it's especially useful for voice applications with a screen. When working with Amazon display templates or Google Actions, consider offering button responses at each step of the conversation to guide the user toward a successful conclusion. These buttons keep users from getting lost while surfacing the relevant features of your application, ensuring a positive user experience.
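On Google's side, these per-turn buttons are suggestion chips. The sketch below builds a webhook response following the Dialogflow/Actions `richResponse` JSON shape; the prompt and chip labels are example choices.

```python
# Sketch: attach suggestion chips to each conversation turn so on-screen
# users can tap instead of speak. Follows the Dialogflow/Actions
# richResponse JSON shape; the prompt and chip labels are examples.

def with_suggestions(speech, chips):
    return {
        "payload": {
            "google": {
                "expectUserResponse": True,
                "richResponse": {
                    "items": [{"simpleResponse": {"textToSpeech": speech}}],
                    # One chip per likely next step keeps users on track.
                    "suggestions": [{"title": c} for c in chips],
                },
            }
        }
    }

turn = with_suggestions(
    "Would you like delivery or pickup?",
    ["Delivery", "Pickup"],
)
```

Keeping the chip set small at each step, one per likely answer, is what turns them into a guide rather than clutter.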