
"Use AI" steps in Automations

Use AI steps in Custom Automations to leverage LLMs

Julia Sommarlund avatar
Written by Julia Sommarlund
Updated today

Summary

  • "Use AI" is a type of step you can use when configuring Custom Automations

  • This step type allows you to connect to different LLMs, either via Planhat's managed connections to various models or via your own external connection

  • The AI step takes a model, a connection and a prompt as input

  • If only one connection is available (a single external connection, or only Planhat's), you don't have to choose one

  • AI steps are only available in Automations in upgraded Planhat (ws.planhat.com)

Who is this article for?

  • Planhat Users who are building Custom Automations for their organization and would like to leverage AI

Series

This article is part of a series on Automations, and of a series on AI in Planhat.



This is a technical deep-dive article

Read on if you'd like to learn about "Use AI" steps - a type of step you can use when configuring Custom Automations. Ensure you read our article on Custom Automations before this one, so you are familiar with the context of where these AI steps can be used.

If you would simply like a general introduction to Automations, check out our overview article here.

🚀 Tip

If you would like to learn via different content styles, check out the "Additional resources" section at the bottom of this article for videos and more!


What are AI steps?

The "Use AI" step type allows you to integrate Large Language Models (LLMs) directly into your Custom Automations. Whether you're looking to analyze customer data, summarize content, or generate dynamic responses, the AI step gives you the flexibility to leverage AI where you need it most.

You can connect to models either through Planhat's managed LLMs (OpenAI's GPT, Anthropic's Claude, or Google's Gemini) or to any LLM you prefer by using external connections. Read about this in our article on setting up AI connections.

📌 Important to note

When using Planhat's managed LLM connections, usage is billed in credits. Read the article about AI credits here.


Why use AI steps?

LLMs in Automations can be used to generate written content for a range of use cases:

  • Prepare for a QBR by summarizing the state of the account, including any opportunities, objectives, open issues or support tickets, and combine with the notes from the last QBR

  • Rank Companies in your portfolio by analyzing how they fit your ICP (Ideal Customer Profile)

  • Summarize all open issues, support tickets and conversations to identify friction points for a customer

  • Define your weekly priorities based on your upcoming meetings this week

  • Run a cross-portfolio issue analysis to identify the most common issues or themes of issues across your portfolio

  • Identify your at-risk Companies across your entire portfolio, and provide you with a game plan with action points


How to set up AI steps

When adding an AI step to your Custom Automation, you'll configure three key inputs, plus the output format and optional tools:

  1. "Provider" - Choose the Provider you'd like to use. You can choose between Planhat-managed LLMs, or your own connection (API key / Integration setup) to LLMs.

  2. "Model" – Choose the LLM model you want to use (e.g., GPT-4, Claude 3, etc.)

  3. "Prompt" – Write the prompt that will be sent to the model. You can insert dynamic references from the Automation context to personalize each response. Read more about dynamic references in our article here.

  4. The output of the AI step can be set to either Text (unstructured output) or JSON (structured output). The structured output can be used e.g. for branching logic, or to create or update data in Planhat or via webhooks. It's great for getting multiple outputs from the LLM. The structured output format is configured by pressing "Configure Schema", where multiple outputs can be set. Each output needs:

    1. Name and type, e.g. "Title" and "text" or "ARR" and "number". The name cannot contain spaces.

    2. Description, which helps the LLM understand what this output is and what it will be used for

    3. Whether it's required or not

    Access the outputs in the next step by using dynamic references; for the example below, these would be <<s-4Pp.Title>> and <<s-4Pp.ARR>>

    If you want a single-value list as output, use the "List" option. If you want a multi-pick list value as output, use "Array" together with "List" to input your values.

    Coming soon: connecting outputs to existing fields in Planhat, so when you want to create or update data you won't have to feed in the field formats! You then won't need to keep the output format in sync with your existing Planhat field configuration, and newly added list values will apply automatically.



    As an example, if your AI Workflow analyzes call transcripts to auto-create opportunities, return "OpportunityIdentified" (true/false) and use that to branch the Workflow, creating a new opportunity (using outputs such as "Title", "Description", "ARR" and "Type") only when it is true.

  5. The AI step also has tools available, e.g. Websearch, which allows the LLM to search the web to find answers. Tip: if you want to leverage Websearch, give examples of trustworthy sources to help steer the LLM. E.g. if you're searching for news on your start-up technology customer, mention sources like TechCrunch.
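The structured-output flow described above can be sketched in code. This is a minimal illustration only, not Planhat's implementation: the output names ("Title", "ARR", "OpportunityIdentified") mirror the examples in this article, and the model response is a hypothetical JSON string.

```python
import json

# Hypothetical structured-output schema, mirroring the examples above.
# Each output has a name, a type, a description, and a required flag.
schema = {
    "Title": {"type": "text", "description": "Short opportunity title", "required": True},
    "ARR": {"type": "number", "description": "Estimated annual recurring revenue", "required": False},
    "OpportunityIdentified": {"type": "boolean", "description": "Whether the call contains an opportunity", "required": True},
}

# Hypothetical raw JSON response from the LLM.
raw_response = '{"Title": "Expansion to EMEA team", "ARR": 24000, "OpportunityIdentified": true}'

outputs = json.loads(raw_response)

# Branching logic, analogous to branching the Workflow on an output:
if outputs["OpportunityIdentified"]:
    opportunity = {"title": outputs["Title"], "arr": outputs.get("ARR")}
```

In the Automation itself you would reference these values with dynamic references such as <<s-4Pp.Title>>; the Python above only illustrates the shape of the data flowing between steps.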

🚀 Tip

You can format data via replacement codes (dynamic references) - e.g. <<Step 2 | toMarkdown>>. This can be useful when sending data to an LLM, or taking data produced by an LLM and saving it elsewhere. You can read more about this here.
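To give an intuition for why markdown formatting helps when sending data to an LLM, here is a rough sketch of turning a list of records into a markdown table. The field names and data are hypothetical, and this is not the actual `toMarkdown` implementation:

```python
def to_markdown_table(records):
    """Render a list of dicts as a markdown table (rough sketch)."""
    if not records:
        return ""
    headers = list(records[0].keys())
    lines = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join("---" for _ in headers) + " |",
    ]
    for record in records:
        lines.append("| " + " | ".join(str(record.get(h, "")) for h in headers) + " |")
    return "\n".join(lines)

# Hypothetical data from an earlier Automation step.
issues = [
    {"title": "SSO login fails", "severity": "high"},
    {"title": "Slow dashboard", "severity": "medium"},
]
table = to_markdown_table(issues)
```

A table like this gives the LLM clear column boundaries, which tends to be easier to reason over than a raw JSON blob.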


Prompt engineering

Writing prompts is an important part of using LLMs: the more explicitly each task is defined, the better the response in terms of quality and relevance. Test iteratively!

There is a video on this topic in the resources list at the end of this article that you can refer to.

Inspiration: converting AI Automation Templates to Custom Automations

For examples of AI prompts to give you some inspiration, you can open your choice of AI Automation Template in the Apps Library ...


... and then click on the "Customize" button to convert the Template to a Custom Automation:

You can then click into the "Use AI" steps to view the details, such as in the example screenshot above.


Automation example - combining a "Use AI" step with a "Read File" step

📌 Important to note

At time of writing, the "Read File" functionality has not been released to all tenants, so you may not have this option available.

AI steps can be particularly useful when combined with other step types, such as "Read File".

For example, let's say that a PDF is uploaded to a Company record as an attachment via its Company Full-Page Profile (example shown below).


This could be a contract containing various key data points about the Company that we would like to save in fields on the Company record - e.g. number of "seats", the renewal date, any add-on products, etc. We can accomplish this with a Custom Automation.

The Automation is triggered when an attachment is created, associated with a Company. We then use a "Read File" step followed by a "Use AI" step to read the document and extract/process the information within it, and then save the key details to the Company in an "Update Company" step.

Below we show each of the parts of the Automation in turn.


"Attachment created" trigger

"Read File" step

Note that we use the replacement code (dynamic reference) <<object.path>> in this particular example, but you could also reference files via a file path or a URL within this step type. As you can read about here, "object" refers to the record in the trigger, which in this case is the attachment that was created.

"Use AI" step

Note that this uses the replacement code (dynamic reference) <<s-gft>> as that's the name of the "Read File" step in this particular example; remember to reference your specific step name if you make your own version of this Automation.

"Update Company" step

This step now references "s-rNN", the name of the "Use AI" step in our Automation; change the reference to your own step name in your version.


Troubleshooting AI steps

There is a tutorial on this topic in the resources list at the end of this article that you can refer to.

In addition, below we summarize some typical error messages you may find in Automation logs, detailing the message, error object, and an explanation of what each error represents. As a prerequisite, you can read about the Automation logs and general troubleshooting here.


1. Rate Limit Reached (429 – OpenAI GPT-5)

Message

Rate limit reached for gpt-5... Limit 500000, Used 500000, Requested 68641. Please try again in 8.236s.

Error Object Highlights

  • status: 429

  • code: rate_limit_exceeded

Detail

This error occurs when the task worker sends a request but the organization has already hit the maximum token throughput allowed for GPT-5. The model rejects further requests until the cooldown window passes.


2. SyntaxError – Unexpected End of JSON Input

Message

Unexpected end of JSON input

Error Object Highlights

  • name: SyntaxError

  • Stack trace points to JSON.parse failing.

Detail

The model returned a response that was not valid JSON or was truncated. The parser attempted to parse incomplete JSON, causing a syntax error.
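Planhat's parser surfaces this via JavaScript's JSON.parse; the Python sketch below reproduces the same class of failure with a hypothetical reply that was cut off mid-object:

```python
import json

# Hypothetical model reply, truncated before the closing quote and brace.
truncated = '{"Title": "Expansion oppo'

parse_failed = False
try:
    json.loads(truncated)
except json.JSONDecodeError:
    # JavaScript's JSON.parse reports this as "Unexpected end of JSON input".
    parse_failed = True
```

Truncation like this is often a symptom of the model running out of output tokens, which connects this error to the MAX_TOKENS error described further below.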


3. SyntaxError – Invalid JSON (Gemini Vertex)

Message

Unexpected token 'A', "As a Stake"... is not valid JSON

Error Object Highlights

  • name: SyntaxError

  • Came from Gemini response

  • The response began with regular text instead of valid JSON.

Detail

This happens when the LLM does not follow the enforced JSON structure. The response body started with natural language, causing JSON.parse to fail.
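The same class of error can be reproduced with a reply that starts with natural language. One common defensive tactic (a sketch only, not Planhat's documented behaviour) is to fall back to locating the first JSON object in the reply; the reply text below is hypothetical:

```python
import json

# Hypothetical Gemini reply that ignored the enforced JSON structure.
reply = 'As a Stakeholder summary: {"Sentiment": "negative", "Risk": true}'

try:
    data = json.loads(reply)          # fails: the reply starts with plain text
except json.JSONDecodeError:
    start = reply.find("{")           # fall back to the first JSON object
    data = json.loads(reply[start:])
```

In the Automation itself, the more robust fix is to tighten the prompt so the model returns only JSON.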


4. Context Window Exceeded (400 – OpenAI)

Message

Your input exceeds the context window of this model. Please adjust your input and try again.

Error Object Highlights

  • status: 400

  • code: context_length_exceeded

Detail

The prompt sent to the model was bigger than the maximum number of tokens allowed. The request must be shortened or chunked.
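A rough way to guard against this before sending the prompt is to estimate the size of the input and split it when needed. The sketch below uses the common heuristic of roughly four characters per token; real tokenizers vary, so treat the numbers as approximations:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return len(text) // 4

def chunk_text(text: str, max_tokens: int) -> list:
    """Split text into pieces that each fit the token budget."""
    max_chars = max_tokens * 4
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

document = "x" * 10_000              # hypothetical oversized input
chunks = chunk_text(document, max_tokens=1_000)
```

Each chunk can then be sent in its own AI step, or summarized separately and the summaries combined in a final step.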


5. Model Not Found / Access Denied (403 – OpenAI)

Message

Project ... does not have access to model gpt-4.1

Error Object Highlights

  • status: 403

  • code: model_not_found

  • The project lacks permission to use the specified model.

Detail

The automation attempted to use a model that the project is not entitled to. The fix is to update credentials, organization access, or switch to an allowed model.


6. Invalid LLM Response: MAX_TOKENS (Gemini Vertex)

Message

Invalid LLM response: MAX_TOKENS

Error Object Highlights

  • Likely indicates that the model exceeded the configured max_tokens constraint.

Detail

This error is thrown when the model attempts to generate more tokens than permitted by the configuration. It is usually caused by missing maxTokens configuration or the model ignoring instructions.


7. Resource Exhausted (429 – Gemini Vertex)

Message

Resource exhausted. Please try again later.

Error Object Highlights

  • code: 429

  • status: RESOURCE_EXHAUSTED

Detail

This occurs when the Vertex AI quota is exceeded—either rate-limit or capacity-related. The system should retry with backoff.
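An exponential-backoff retry loop of the kind this error calls for might look like the following sketch. The `call_model` argument and the `RuntimeError` stand-in for a 429 response are hypothetical placeholders, not Planhat's or Vertex AI's actual API:

```python
import time

def call_with_backoff(call_model, max_attempts=5, base_delay=1.0):
    """Retry a rate-limited call with exponential backoff (sketch)."""
    for attempt in range(max_attempts):
        try:
            return call_model()
        except RuntimeError:                       # stand-in for a 429 error
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # wait 1s, 2s, 4s, ...

# Hypothetical model call that fails twice, then succeeds.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("RESOURCE_EXHAUSTED")
    return "ok"

result = call_with_backoff(flaky_call, base_delay=0.01)
```

Doubling the delay on each attempt gives the quota window time to reset instead of hammering the API with immediate retries.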


Additional resources

We have a wide range of resources on AI steps, to help you create and troubleshoot your AI Automations.



Video: AI Step in Automations

Video: Connect a Planhat-managed LLM or bring your own

Video: Prompt Design in Automations

Guide: Build an AI Custom Automation

More specific versions of the above:

Tutorial: AI Automation troubleshooting
