
Google Cloud Unveils Sweeping Generative AI Lineup, Outlining an Ambitious Strategy

In a whirlwind of announcements at its recent Google Cloud Next event, the tech giant unveiled over 50 new generative AI services and capabilities. These launches, from DALL-E-style image generation to code autocompletion tools, showcase Google Cloud’s strategy in this booming field.


While the presentations lacked consumer-facing sensations like ChatGPT from Microsoft-backed rival OpenAI, they demonstrated Google’s methodical approach to enterprise-grade generative AI. With customised solutions for developers, data scientists, contact centres, healthcare, and other domains, Google Cloud aims to permeate business applications with artificial intelligence.

This article analyses key announcements, notable omissions, and the long-term implications of Google Cloud’s product map as the cloud wars heat up in 2023.

Foundation Models 

PaLM 2: Features and Capabilities

The headliner among Google’s generative AI releases was the Pathways Language Model version 2, or PaLM 2. This large-scale foundation model serves as the backbone for many new Cloud services.

Built using Google’s Pathways AI architecture, PaLM 2 can write and debug code, summarise lengthy documents into bullet points, reason through multi-step problems, and translate between more than 100 languages. It demonstrates enhanced common sense and multitasking abilities and includes safeguards against harmful content.

PaLM 2 represents a substantial leap over the original PaLM model introduced in 2022. It was reportedly trained on several times more data than its predecessor, drawn from webpages, books, and other sources, and this broader dataset underpins its strong natural language capabilities.
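As a concrete illustration, here is a minimal sketch of how a developer might call a PaLM 2-based text model through the Vertex AI Python SDK. The model name, project details, and generation parameters below are assumptions for illustration, not specifics confirmed in the announcements.

```python
# Minimal sketch: calling a PaLM 2-based text model via the Vertex AI
# Python SDK. Model name, project ID, and parameters are assumptions.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project

model = TextGenerationModel.from_pretrained("text-bison")
response = model.predict(
    "Summarise the following release notes into three bullet points: ...",
    temperature=0.2,        # lower temperature favours more deterministic output
    max_output_tokens=256,  # cap the length of the generated summary
)
print(response.text)
```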

Med-PaLM 2: Specialisation in the Medical Field

One tailored variant of the PaLM 2 foundation model is Med-PaLM 2, customised for healthcare uses. Trained on medical records, clinical trial data, and scientific papers, it understands nuanced medical language and scenarios.

Med-PaLM 2 can synthesise insights from lengthy documents into concise overviews, simplify complex medical jargon into plain explanations, highlight drug interactions, and suggest diagnoses based on patient histories.

This healthcare focus helps Google Cloud target the lucrative and highly regulated medical sector, which demands privacy protections and expert domain knowledge. Med-PaLM 2 provides that specialisation out of the box.

Llama 2: Open-Source LLM Now Available on Google Cloud

In contrast to proprietary foundation models like PaLM 2, Google Cloud also announced the availability of Llama 2, Meta’s open large language model, on its platform for developers.

Llama 2 is available in versions ranging from 7 billion to 70 billion parameters, with chat-tuned variants refined on dialogue data and human feedback. This gives the model strong capabilities for natural conversation.

As an open LLM, Llama 2 provides developers an on-ramp to experimenting with generative AI and building customised solutions on Google Cloud. The release aligns with Google’s ethos supporting open ecosystems.

Anthropic Claude 2: Pre-Announced Launch

Looking ahead, Google Cloud plans to integrate Claude 2, the generative AI assistant from AI safety start-up Anthropic, into its platform.

Claude 2 promises to handle complex information requests, admit mistakes instead of guessing, and maintain a consistent persona during long conversations. Built with Constitutional AI, it aims to be harmless, honest, and helpful.

The pending addition of Claude 2 will provide Google Cloud customers with an enterprise-ready chatbot alternative as demand for natural language applications keeps growing.

TII Falcon: Another Open-Source LLM Offered by Google Cloud

Google Cloud also announced the availability of Falcon, an open-source generative AI model created by the Technology Innovation Institute (TII) in Abu Dhabi.

Like Llama 2, Falcon allows developers to experiment with generative AI techniques using an accessible, royalty-free model. Available in 7 billion and 40 billion parameter variants, it is smaller and more streamlined than alternatives like GPT-3.

Falcon is released under a permissive licence, and its availability on Google Cloud plays into the company’s multi-pronged strategy around responsible and transparent AI development.

Software Solutions

Beyond its array of foundation models, Google Cloud highlighted integrations that make generative AI actionable across software domains:

Imagen: Visual Appeal and Style-Tuning Features

Imagen allows users to generate highly realistic images simply by providing text descriptions. Under the hood, it leverages diffusion models to create photorealistic images with control over attributes like lighting, camera angle, and background.

Users can provide general prompts like “a daisy flower in a blue vase on a wooden table” or get more specific by tuning parameters such as image size, artistic style, colour schemes, subject poses, etc. This level of stylistic control brings tremendous creative empowerment.

Whether creating characters, objects, scenes, or designs, Imagen makes it possible to instantly visualise and iterate on concepts. The model also handles compositional prompts, such as “a turtle wearing a top hat and monocle.”

Imagen can help marketers mock up promotional graphics, social posts, and ad concepts quickly. Creative agencies can brainstorm ideas faster. Developers can auto-generate assets for games and apps. The possibilities are vast with this visually imaginative AI.
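For developers curious what this looks like in practice, here is a rough sketch using the preview vision module of the Vertex AI Python SDK. The model identifier, method names, and project details are assumptions and may differ from the production Imagen offering.

```python
# Rough sketch: text-to-image generation on Vertex AI. The preview module
# path, model identifier, and save method are assumptions.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project

model = ImageGenerationModel.from_pretrained("imagegeneration@002")
result = model.generate_images(
    prompt="a daisy flower in a blue vase on a wooden table, soft morning light",
    number_of_images=2,  # request a couple of variations to iterate on
)
result.images[0].save(location="daisy.png")  # write the first candidate to disk
```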

Duet: A Cloud Assistant for Developers

Duet provides an AI co-pilot to assist software developers with application development, debugging, testing, documentation, and more. Its natural language interface allows coders to make requests in plain English, such as “find null reference errors,” “generate unit tests for this class,” “refactor this function to improve efficiency,” or “explain what this code does.”

Duet understands context to carry out insightful actions, not just blind code generation. It can summarise long functions, auto-complete half-written code, point out logical flaws, implement requested features, and speed up development cycles.

Powered by PaLM and other models, Duet performs like an expert developer familiar with coding best practices. It reduces grunt work and helps programmers write more resilient, efficient code.

For stretched engineering teams, Duet is a scalable, automated team member that can ease workloads, minimise mistakes, encourage best practices, and boost overall productivity. It lowers the barrier to quality for coders of all skill levels.
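Duet itself is surfaced through IDE plugins and the Cloud console rather than a standalone SDK, but Vertex AI exposes related code models directly. The hedged sketch below uses the Codey “code-bison” model to illustrate the kind of natural-language coding request described above; the model name and parameters are assumptions.

```python
# Illustrative sketch: a natural-language coding request against a Vertex AI
# code model. "code-bison" and its parameters are assumptions; Duet surfaces
# similar capabilities inside IDEs and the Cloud console instead.
import vertexai
from vertexai.language_models import CodeGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project

model = CodeGenerationModel.from_pretrained("code-bison")
response = model.predict(
    prefix=(
        "Write a Python function that removes None values from a list "
        "and raises a ValueError if the input is not a list."
    ),
    max_output_tokens=512,  # allow room for the function plus a short docstring
)
print(response.text)
```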

Duet in Google Workspace

The integration between Duet and Google Workspace allows users to conversationally ask the AI assistant to summarise documents into bullet points, write emails based on short descriptions, explain complex terms in simple language, answer questions about data visualisation charts, perform calculations in tables, and more.

For example, users can say, “Summarise key points from this document into a bulleted list” or “Write an email to my team recapping the main takeaways from today’s meeting.” Duet handles these natural instructions to save time on repetitive but cognitively demanding tasks.

For marketers, Duet can analyse reports and automatically generate insights. Customer service reps can use it to quickly respond to FAQs. Analysts can have Duet explain trends in data visualisations through conversational queries.

By combining the collaboration capabilities of Workspace with an AI assistant, Google Cloud aims to help knowledge workers across teams communicate, synthesise information, and complete requests more efficiently.

Duet for Data Science

Duet integrates with Google’s Vertex AI suite to serve as an AI co-pilot for data scientists and analysts. It understands terminology and processes specific to data/ML workflows.

Data scientists can ask Duet to clean messy data sets, fix missing or anomalous values, select and tune machine learning models, explain model performance, identify important features, summarise insights, generate charts/visualisations, and more.

These capabilities allow data practitioners to work faster by automating tedious parts of the ML lifecycle. Duet lets them focus cognitive energy on higher-value strategic tasks like framing the right questions, interpreting results, and communicating findings.

With data science talent still hard to hire, tools like Duet provide welcome assistance. For example, junior team members can leverage Duet to punch above their weight class while learning on the job.

Overall, Duet aims to expedite data-driven workflows from exploratory data analysis to deployment, with an AI assistant that specialises in the nuances of data science rather than general coding alone.
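As a flavour of the routine clean-up such an assistant can take off a practitioner’s plate, here is a generic, hypothetical example of the code a request like “fix missing or anomalous values in this data set” might yield. The file name and column handling are illustrative assumptions, not output produced by Duet.

```python
# Hypothetical example of routine data cleaning an assistant might generate.
# The input file and thresholds are placeholders.
import pandas as pd

df = pd.read_csv("sales.csv")  # placeholder input file

# Fill missing numeric values with the column median, which is robust to outliers.
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())

# Clip extreme values beyond the 1st and 99th percentiles to tame anomalies.
low = df[numeric_cols].quantile(0.01)
high = df[numeric_cols].quantile(0.99)
df[numeric_cols] = df[numeric_cols].clip(lower=low, upper=high, axis=1)

print(df.describe())
```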

Duet for Security

Duet for Security assists infosec teams by understanding and conversing about security infrastructure, policies, controls, vulnerabilities, and incidents.

Through natural language, security engineers can ask Duet to explain which assets a firewall ruleset affects, suggest access control lists based on zero-trust principles, analyse policies and flag potential misconfigurations, document security processes, and more.

Duet reviews configurations and settings across networks to identify risks. It can simulate attacks against systems to uncover weaknesses, recommend mitigations, and summarise incidents into reports.

Duet acts as a digital teammate that can handle policy analysis, access reviews, control validation, and documentation at scale for overburdened security teams.

While it doesn’t replace human judgment and expertise, Duet slashes the grunt work involved in hardening complex environments against continuous threats.

Additional Offerings

Rounding out its expansive generative AI line-up, Google Cloud highlighted several other notable integrations:

Vertex AI Search: Enterprise Search Features

Vertex AI Search applies generative AI techniques like semantic ranking and query understanding to improve enterprise search experiences.

It parses contextual meaning and intent behind queries to deliver more relevant results. This goes beyond just matching keywords to understand semantic nuances.

For example, Vertex AI Search can discern synonyms, ignore typos, and infer concepts associated with vague or specialised terms based on enterprise data schemas.

The service also automatically generates summaries of results to help users quickly determine if a document is relevant. This saves clicking through endless search engine results pages.

Vertex AI Search also provides auto-complete suggestions to refine queries as users type. By continuously learning from user behaviour and search contexts, the service improves its accuracy over time.

For enterprises with vast document corpora and complex jargon, Vertex AI Search aims to enhance discovery and retrieval through next-generation semantics.
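To make the idea of semantic ranking concrete, the toy sketch below scores documents by cosine similarity between embedding vectors rather than keyword overlap. The embedding model name is an assumption, and Vertex AI Search itself is a managed service that handles this wiring internally.

```python
# Toy illustration of semantic ranking: score documents by embedding
# similarity instead of keyword overlap. The embedding model name is an
# assumption; Vertex AI Search manages this internally.
import numpy as np
import vertexai
from vertexai.language_models import TextEmbeddingModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project
model = TextEmbeddingModel.from_pretrained("textembedding-gecko")

def embed(texts):
    # Return one embedding vector per input string.
    return [np.array(e.values) for e in model.get_embeddings(texts)]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

docs = [
    "Quarterly revenue grew 12% driven by cloud subscriptions.",
    "The cafeteria menu changes every Monday.",
]
query_vec = embed(["How did cloud sales perform last quarter?"])[0]
doc_vecs = embed(docs)

# Rank documents by similarity to the query, highest first.
for doc, vec in sorted(zip(docs, doc_vecs), key=lambda d: cosine(query_vec, d[1]), reverse=True):
    print(f"{cosine(query_vec, vec):.3f}  {doc}")
```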

Vertex AI Conversation: New Chatbot Builder

Vertex AI Conversation is a no-code environment for building conversational AI chatbots. It provides an intuitive interface to create bots that understand natural language and generate human-like responses.

Developers can import pre-built templates to accelerate getting started. Or they can design conversation flows from scratch using drag-and-drop dialogue components.

The service integrates with tools to manage bot deployment, versioning, monitoring, and optimisation. Users can launch chatbots on websites, apps, messaging platforms, contact centres, and more.

Vertex AI Conversation offers pre-trained conversation models like PaLM and Claude to provide advanced language capabilities out-of-the-box. But the service also makes it easy to customise with unique training data.

For enterprise teams looking to quickly deploy AI-powered chatbots that engage customers or employees, Vertex AI Conversation provides a streamlined environment that requires no data science expertise.

AlloyDB AI: Generative AI Applications with PostgreSQL Interface

AlloyDB is a fully managed PostgreSQL-compatible database service. The new AlloyDB AI capability allows developers to leverage generative AI within database operations using familiar SQL syntax.

For example, queries can ask the database to generate product descriptions, summarise sales reports, suggest predictive forecasts, compose meeting notes, annotate datasets, and more.

AlloyDB AI can also detect anomalies, identify sparse patterns, predict missing values, and optimise queries. All of this happens inside the database engine without moving data elsewhere.

Since AlloyDB provides full Postgres compatibility, developers can access these AI superpowers through standard interfaces like JDBC without rearchitecting applications.

For enterprises with vital data in PostgreSQL databases, AlloyDB AI unlocks valuable applications of generative AI while avoiding disruption. The tight integration between AlloyDB and Google’s AI services makes for a powerful combination.
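As a hedged sketch of what this could look like from an application, the example below runs a similarity query against AlloyDB through its standard PostgreSQL interface. The embedding() SQL function, the vector column, and the connection details are illustrative assumptions rather than verified specifics.

```python
# Hedged sketch: querying AlloyDB through its standard PostgreSQL interface.
# The embedding() function, model name, vector column, and connection details
# are assumptions for illustration.
import psycopg2

conn = psycopg2.connect(
    host="10.0.0.5",       # placeholder AlloyDB instance address
    dbname="catalog",
    user="app_user",
    password="change-me",  # supply via a secret manager in practice
)

with conn, conn.cursor() as cur:
    # Rank products by vector similarity to a natural-language request,
    # computing the query embedding inside the database engine.
    cur.execute(
        """
        SELECT name, description
        FROM products  -- hypothetical table with a pgvector column 'embedding_col'
        ORDER BY embedding_col <-> embedding('textembedding-gecko@001', %s)::vector
        LIMIT 5;
        """,
        ("waterproof hiking boots for winter",),
    )
    for name, description in cur.fetchall():
        print(name, "-", description)
```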

Notable Absences

While Google Cloud’s announcements spanned a sweeping range of services, some notable omissions left observers puzzled.

Bard: Google’s Answer to ChatGPT?

The most glaring absence was Bard, Google’s highly anticipated response to chatbot sensation ChatGPT. After media leaks, many expected Bard to feature prominently at Google Cloud Next, but the ChatGPT rival was conspicuously absent.

LaMDA: Once Considered Google’s Favoured Generative AI

LaMDA, Google’s Language Model for Dialogue Applications, was also a no-show despite the 2022 hype. LaMDA’s natural conversation abilities were previously demonstrated in Google I/O demos, making its exclusion surprising.

Dialogflow: Most Widely Used Conversational AI Platform

Google did not showcase any enhancements for Dialogflow, its popular bot-building framework used by over 500,000 developers. With conversational AI booming, Dialogflow’s stagnation raises questions.

Google Assistant: Most Widely Used AI Chatbot in Google’s Ecosystem

Finally, Google Assistant, the voice assistant available on over 1 billion devices, saw no mention of generative upgrades. Google risks falling behind Alexa and Siri without infusing Assistant with next-gen AI.

Strategic Implications

Analysing Google’s multitude of announcements against its puzzling omissions reveals key strategic takeaways:


Google’s Evolving Strategy in Generative AI

Google Cloud’s focus on specialised solutions for enterprises contrasts with the viral consumer hype around OpenAI’s ChatGPT. This highlights Google’s methodical approach aimed at driving revenue versus chasing buzz.

Integrations with Google Workspace, healthcare tools, and other services emphasise real-world utility over entertainment applications. Google is banking on serious business use cases versus novelty.

Open ecosystems are still a priority with additions like Llama 2 and Falcon. However, proprietary models like PaLM 2 signal Google’s willingness to compete more aggressively using exclusive advantages.

Competitive Landscape With AWS and Microsoft Azure

Google Cloud’s generative AI push brings it closer to parity with Microsoft Azure’s offerings. However, AWS still leads in enterprise AI adoption, and generative announcements from Amazon’s re:Invent event will be telling.

By pre-announcing Claude 2 integration, Google aims to compete with Alexa’s upcoming integration of Anthropic’s Claude chatbot. The battle of the Claude bots will be an important benchmark.

Overall, Google is aiming for a middle path – more accessible than Azure but less homegrown than AWS. Finding the right balance will be key to growth.

What does all of this mean for you, the general consumer?

Google Cloud’s barrage of announcements provides a comprehensive window into its strategy for bringing next-generation generative AI capabilities to enterprises worldwide. While lacking flashy consumer products like ChatGPT, Google instead emphasised specialised solutions tailored to real-world business use cases across industries.

This focus aligns with Google’s strengths in understanding complex information needs and building utilities that enhance productivity. By integrating AI into developer tools, analytics platforms, databases, and business collaboration suites, Google aims to make artificial intelligence seamlessly accessible for organisations without requiring data science expertise.

The scope of releases highlights Google’s methodical approach to research and development, even as competitors rush deficient products to market. Google has patiently amassed a formidable array of pre-trained models, infrastructure, and vertical integrations poised to permeate business applications.

These tools promise to reshape workflows from product design to customer service, enable data-driven decisions, boost output for stretched teams, and maximise return on existing software investments. Hands-on previews suggest Google has achieved impressive technical feats.

However, questions remain around strategy gaps, especially the glaring absence of ChatGPT and LaMDA competitors. Google risks falling behind rivals in viral consumer AI hype. Its restrained rollout also provides competitors time to parry its lineup of enterprise solutions.

Nonetheless, Google Cloud now offers a comprehensive vision and portfolio for AI-powered digital transformation across industries. Its advances in responsible AI development could also become differentiating factors if benefits materialise.

For technology leaders, deeper evaluation is required to determine where Google’s offerings excel against alternatives from AWS, Microsoft, startups, and open-source projects. However, organisations seeking to embed intelligence in business processes now have an extensive menu from which to choose.

In the unfolding generative AI revolution, Google has moved to lead from the cloud. With formidable resources and dedication to research excellence, Google Cloud aims to permeate tomorrow’s enterprise stack with artificial intelligence, even if it lacks splashy demos. Its latest announcements open an intriguing new chapter in the AI cloud wars.

For general consumers, Google’s focus on enterprise applications means near-term impact may be muted. Flashes of brilliance in demos hint at future possibilities, but Google deferred consumer products to double down on commercial opportunities.

This restraint contrasts with the exuberant AI hype from competitors. However, Google’s tools could indirectly enhance consumer experiences as its generative AI permeates business processes behind the scenes, from improving customer service chatbots to accelerating drug discoveries.

While breathtaking consumer AI remains elusive for now, Google Cloud’s poker-faced lineup emphasises unlocking pragmatic intelligence for enterprises first. By tailoring solutions to industry needs, Google hopes the democratising benefits of generative AI will eventually reach users worldwide in a sustainable way. Google is playing the long game; it is too early to judge the results, but the strategy promises profound change ahead.

FAQs

What are foundation models?

Foundation models like PaLM 2 are large pre-trained language models that serve as a basis for developing specialised AI applications. They provide fundamental natural language understanding capabilities upon which more tailored solutions can be built.

How does Duet work?

Duet is an AI assistant service that allows users to make natural language requests, which it then fulfils by generating code, summarising documents, fixing bugs, or completing other programming and data science tasks.

What is the significance of these announcements for Google Cloud?

By announcing dozens of generative AI services, Google Cloud aims to underscore its strategy of bringing next-generation AI capabilities to organisations through customised enterprise solutions versus consumer products.