Amazon Web Services (AWS) is leaning heavily into generative artificial intelligence (AI), as evidenced by its newly announced innovations.

“One of the transformative technologies gaining significant traction today is generative AI,” Swami Sivasubramanian, VP of Database, Analytics, and ML, said at the annual AWS Summit held in New York. “Generative AI has captured our imagination for its ability to create images and videos, write stories, and generate code.”

Sivasubramanian pointed out that generative AI has already reached its tipping point and will transform every application, industry, and business. 


Expansions and innovations

AWS is expanding Amazon Bedrock, its fully managed foundation model (FM) service. Bedrock now includes Cohere as an FM provider, offering its Command and Embed models, along with the latest FMs from Anthropic (Claude 2) and Stability AI (Stable Diffusion XL 1.0), plus a new capability for creating fully managed agents in just a few clicks. These additions give builders new features without requiring additional expertise.
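
For developers curious what using the newly added models looks like, the snippet below is a minimal sketch of invoking Claude 2 through the Bedrock runtime API with boto3; the region, model identifier, prompt format, and token limit are assumptions based on Bedrock's documented invoke_model call, not details from the announcement.

```python
# Minimal sketch: calling the Claude 2 foundation model on Amazon Bedrock via boto3.
# Region, model ID, and prompt/response format are assumptions for illustration.
import json

import boto3

# The "bedrock-runtime" client handles model invocation ("bedrock" handles management).
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude models expect the Human/Assistant prompt convention.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize the AWS Summit New York announcements.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",  # assumed identifier for the Claude 2 FM
    contentType="application/json",
    accept="application/json",
    body=body,
)

# The response body is a stream containing the model's JSON output.
print(json.loads(response["body"].read())["completion"])
```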

AWS also introduced AWS HealthScribe, a new HIPAA-eligible service aimed at empowering healthcare software providers to build clinical applications that use speech recognition and generative AI. Powered by Amazon Bedrock, AWS HealthScribe helps clinicians save time by generating clinical documentation, and it makes it faster and easier for healthcare software providers to add generative AI capabilities to their applications (starting with general medicine and orthopedics) without managing the underlying machine learning (ML) infrastructure or training their own healthcare-specific large language models (LLMs).
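
As a rough illustration of how a healthcare software provider might submit a visit recording for documentation, the sketch below uses the start_medical_scribe_job operation exposed through the Amazon Transcribe API in boto3; the job name, S3 locations, IAM role, and settings are placeholders, and the exact request shape should be confirmed against the service documentation.

```python
# Hedged sketch: submitting a clinician-patient recording to AWS HealthScribe.
# All resource names below (bucket, role ARN, job name) are placeholders.
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_medical_scribe_job(
    MedicalScribeJobName="visit-2023-07-26",                        # hypothetical job name
    Media={"MediaFileUri": "s3://example-bucket/visit-audio.wav"},  # hypothetical recording location
    OutputBucketName="example-output-bucket",                       # where transcript and notes are written
    DataAccessRoleArn="arn:aws:iam::123456789012:role/HealthScribeAccess",  # placeholder IAM role
    Settings={
        "ShowSpeakerLabels": True,  # distinguish clinician and patient speech
        "MaxSpeakerLabels": 2,
    },
)
```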

Free training and courses

Anticipating the need to upskill talent as generative AI reaches mainstream use, AWS has released seven free and low-cost, on-demand training courses to help more people understand, implement, and begin using generative AI.

Infrastructure for generative AI

AWS is the first hyperscale cloud provider to offer NVIDIA’s H100 GPUs in general availability for production use. Amazon EC2 P5 instances, powered by NVIDIA H100 Tensor Core GPUs and backed by AWS’s state-of-the-art networking and scalability, are now generally available. P5 instances are ideal for training and running inference on the increasingly complex LLMs behind the most demanding, compute-intensive generative AI applications, such as question answering, code generation, video and image generation, and speech recognition.
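
For teams that want to try the new hardware, the sketch below requests a single P5 instance with boto3; the AMI, key pair, and subnet identifiers are placeholders, and real workloads would typically pair this with an appropriate deep learning AMI and capacity reservations.

```python
# Minimal sketch: launching one of the new H100-based P5 instances with boto3.
# ImageId, KeyName, and SubnetId are placeholders for illustration only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder: a deep learning AMI in your account/region
    InstanceType="p5.48xlarge",           # NVIDIA H100 Tensor Core GPU instance size
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                # placeholder key pair for SSH access
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet with available P5 capacity
)
```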
