Summary
AI is integrated across Planhat to help automate tasks, generate content, and surface insights, making prospect- and customer-facing teams more efficient and strategic
Planhat supports AI steps in both template-based and custom Automations, allowing users to leverage LLMs (Planhat's managed or via custom integrations/connections) directly within processes
Customer/prospect interactions (emails, chats, calls) can be automatically analyzed for sentiment, helping teams track relationship health and prioritize effectively. Sentiment rolls up to Company, End User and User level for deep insights, and can be used in Health Scores, Automations and reporting (e.g. Dashboards). In addition to this classification, subject-matter categories (e.g. "Support") can also be automatically assigned
Our Writing Assistant and Conversation Summary streamline note-taking and review by generating clean summaries, action items, translations, and quick overviews of message threads
Usage of these AI tools (except individual connections to AI providers) consumes Planhat AI credits and follows region-specific data handling protocols
Who is this article for?
Anyone who would like an overview of AI functionality within Planhat
Series
AI in Planhat ⬅️ You are here
Article contents
Introduction
AI can be used across Planhat in different areas of the platform. Leveraging AI to help automate tasks, generate content and gain deep insights helps your team to become more efficient and more strategic when working with customers and prospects.
This article gives an overview of the different ways of using AI in Planhat, as well as outlining security and commercial considerations.
AI step in Automations
Planhat's Automations have native support for AI. In our App Center, you can find AI Automation Templates that can be added to your Planhat. These templates are pre-built, tested use cases that are easy to get started with - it takes just one click!
If you're using Custom Automations, you can also leverage our "Use AI" step, which allows you to use LLMs directly within your Automations. For AI steps, you can choose to use Planhat's managed LLMs or bring your own LLM through an AI connection.
Read more about our AI step in Automations.
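As a conceptual illustration only - Planhat Automations are configured in the UI, not in code, and the function and field names below are hypothetical - an automation with a "Use AI" step roughly follows this shape: a trigger fires, record fields are combined into a prompt, the LLM responds, and a later step can save the output.

```python
# Conceptual sketch only: the llm() helper stands in for the "Use AI" step,
# which in a real Automation would call Planhat's managed LLMs or your own
# AI connection. Names like aiRenewalNote are hypothetical.

def llm(prompt: str) -> str:
    """Stand-in for the "Use AI" step; a real step would call an LLM."""
    return f"[LLM output for: {prompt[:40]}...]"

def on_company_updated(company: dict) -> dict:
    """Example automation: draft an internal note when a renewal date changes."""
    prompt = (
        f"Write a short internal note for {company['name']} "
        f"whose renewal date is {company['renewalDate']}."
    )
    note = llm(prompt)               # the "Use AI" step
    company["aiRenewalNote"] = note  # a later step could save this to a field
    return company

company = on_company_updated({"name": "Acme Co", "renewalDate": "2025-09-30"})
```

The key design point this sketch reflects is that the AI step is just one step in a larger flow: its output can feed subsequent steps (save to a field, send an email, create a task) or simply be reviewed in the Automation logs.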
MCP server
An MCP (Model Context Protocol) server enables LLMs to securely access and operate on data sources. Planhat has developed an MCP server for our API; read more about the details on our developer site here.
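As a hedged illustration, connecting an MCP-capable client to a remote MCP server is typically a small configuration entry. The server name and URL below are placeholders, not Planhat's actual values - consult the developer site linked above for the real connection details and authentication requirements.

```json
{
  "mcpServers": {
    "planhat": {
      "url": "https://example.invalid/planhat-mcp"
    }
  }
}
```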
Conversation Sentiment and Category Classification
Customer/prospect conversations - emails, chats, and calls - contain insights that are often overlooked due to the manual effort required to analyze them.
Planhat now automates the analysis and classification of these interactions to save time and provide deeper understanding. Conversations are tagged by category (e.g., Support, Billing) and analyzed for sentiment (Positive, Neutral, Negative). The sentiment is rolled up into sentiment scores on Company, End User and User levels (models). This helps identify at-risk relationships, prioritize urgent issues, and drive smarter processes. Ultimately, it enhances visibility into customer health, team effectiveness, and emerging trends.
Read more about Conversation Sentiment and Category Classification.
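To make the rollup idea concrete, here is a minimal, purely illustrative sketch of how per-conversation sentiment tags could be aggregated into a single score at the Company, End User or User level. Planhat does not publish its rollup formula, so the numeric mapping and averaging below are assumptions for illustration only.

```python
# Illustrative only: a hypothetical mapping from sentiment tags to numbers,
# averaged into a single rollup score in the range [-1, 1].
from statistics import mean

TAG_VALUES = {"Positive": 1.0, "Neutral": 0.0, "Negative": -1.0}

def rollup_sentiment(tags: list[str]) -> float:
    """Average per-conversation sentiment tags into one rollup score."""
    if not tags:
        return 0.0  # no conversations yet: treat as neutral
    return mean(TAG_VALUES[t] for t in tags)

# Four conversations for one company: mostly positive, one negative
company_tags = ["Positive", "Positive", "Neutral", "Negative"]
score = rollup_sentiment(company_tags)  # (1 + 1 + 0 - 1) / 4 = 0.25
```

A score like this is what makes sentiment usable downstream, e.g. as an input to a Health Score, a trigger condition in an Automation, or a metric on a Dashboard.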
Writing Assistant and Conversation Summary
Planhat's AI-powered Writing Assistant and Conversation Summary tools help you save time and focus on strategic tasks by streamlining meeting notes and conversation reviews. Writing Assistant transforms rough call notes into clear summaries and action items, helps you craft follow-up emails, and can even translate text or answer questions, making collaboration easier. Conversation Summary provides quick overviews of long email or chat threads, enabling teams to stay aligned and informed without reading through every message.
Read more about Writing Assistant and Conversation Summary.
Security considerations
The following AI features are powered by Planhat's connection to Vertex AI (Google Cloud's AI Platform), providing access to Gemini and Anthropic LLMs:
AI steps in Automations using Planhat managed LLMs
Conversation Sentiment and Classification
Conversation Summary
Writing Assistant
This means we (Planhat) have our own instance through Vertex AI, which lets us control data privacy and the processing region. Input and output data are not available to third parties, and your data is not used to train models. If you're in the EU, your data is sent to instances in the EU (adhering to the region's data regulations); if you're in the US, you will use the US instance. Vertex AI does not store requests or responses.
When you use the AI features listed above, a request is sent from the Planhat API to our own instance of Vertex AI, the request is processed, and the response is sent back to our API. Every step is encrypted over HTTPS.
Whether the response is automatically saved or not (in Planhat's database) depends on which feature you are using:
For AI steps in Automations, the response is shown in Automation logs for 30 days, and saved in our database only if you decide to save it.
For the conversation sentiment, the sentiment tag (Positive, Neutral, Negative) is saved in our database, as well as the rolled-up score on Company/End User/User level.
For the conversation classification, the category (e.g. Support, General Enquiry) is saved in our database.
For Writing Assistant, the response is only shown to you, and is not saved to our database unless you choose to save it.
For Conversation Summary, once we receive the response, we save it to the Planhat database so we can show it again (when you want to view it at a later date) without reprocessing it.
Commercial considerations
Usage of the following AI features in Planhat is charged through AI credits:
AI steps in Automations using Planhat managed LLMs
Conversation Sentiment and Classification
Conversation Summary
Writing Assistant
Read more about AI credits and permissions.
Machine Learning features
In addition to the native AI features in Planhat, there is also built-in functionality that makes use of machine learning (ML) rather than connecting to an LLM.
These are the "Deviations from Normal" Calculated Metric templates, and the "Relevance" End User system field, both of which you can read about here.