AI & LLM · Glossary

What is an LLM API?

Definition: A programmatic interface to a large language model (such as OpenAI's GPT or Anthropic's Claude) that accepts text prompts and returns generated text, enabling automated AI-powered workflows.

LLM APIs let you call AI models programmatically. You send a prompt via HTTP request. You get back generated text. This is how GTM Engineers embed AI into automated pipelines instead of copy-pasting from ChatGPT.
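That request/response loop is just an HTTP POST with a JSON body. Here is a minimal sketch using only Python's standard library, assuming OpenAI's chat completions endpoint and an API key in the `OPENAI_API_KEY` environment variable:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-3.5-turbo") -> urllib.request.Request:
    """Package a prompt into an HTTP POST request for the chat completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Assumes your key is set in the environment, never hard-coded.
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
        method="POST",
    )

def complete(prompt: str) -> str:
    """Send the prompt and return the generated text from the first choice."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]
```

In practice you would use the provider's official SDK, which handles retries and errors for you; the point here is only that "calling an LLM" reduces to one request out, one structured response back.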

The two main providers are OpenAI (GPT-4, GPT-3.5) and Anthropic (Claude). Both charge per token, where a token is roughly three-quarters of an English word. GPT-4 costs $0.03-$0.06 per 1K tokens, and Claude 3 Opus is priced similarly. Cheaper models (GPT-3.5, Claude Haiku) cost 10-20x less and work fine for simpler tasks like email personalization and data categorization.
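Per-token pricing means cost scales directly with prompt and response length, so it is worth estimating before you run a workflow over thousands of leads. A quick sketch using the GPT-4 prices quoted above ($0.03 per 1K input tokens, $0.06 per 1K output tokens):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  price_in_per_1k: float = 0.03,
                  price_out_per_1k: float = 0.06) -> float:
    """Estimate the USD cost of one call at the GPT-4 prices cited above."""
    return (input_tokens / 1000) * price_in_per_1k + (output_tokens / 1000) * price_out_per_1k

# A 500-token prompt with a 200-token reply:
# 0.5 * $0.03 + 0.2 * $0.06 = $0.027 per call,
# so enriching 10,000 leads at that size runs about $270 on GPT-4.
```

Swap in the cheaper model's rates and the same 10,000-lead run drops by the 10-20x factor mentioned above, which is why model choice is usually the biggest cost lever.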

In a GTM workflow: n8n calls the OpenAI API to classify leads by industry based on their company description. Clay calls Claude's API to generate personalized email opening lines. A Python script calls GPT-4 to summarize a prospect's LinkedIn profile into 3 bullet points for the AE's pre-call prep. Each of these is an API call with a prompt, input data, and a structured response.
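The lead-classification step above hinges on prompt design: constrain the model to a fixed label set so the reply slots directly into a CRM field. A sketch with hypothetical helper names and an illustrative label list:

```python
# Illustrative label set -- in a real workflow this would match your CRM's picklist.
INDUSTRIES = ["SaaS", "Fintech", "Healthcare", "E-commerce", "Other"]

def build_classification_prompt(company_description: str) -> str:
    """Ask the model to pick exactly one industry from a closed list."""
    labels = ", ".join(INDUSTRIES)
    return (
        f"Classify the company below into exactly one industry from this list: {labels}.\n"
        "Reply with the industry name only, nothing else.\n\n"
        f"Company description: {company_description}"
    )

def parse_label(response_text: str) -> str:
    """Validate the model's reply; fall back to 'Other' on anything unexpected."""
    label = response_text.strip()
    return label if label in INDUSTRIES else "Other"
```

The validation step matters: models occasionally reply with extra words or an off-list label, and a fallback keeps bad values out of your pipeline.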

Practical tips: use the cheapest model that produces acceptable output (GPT-3.5 for classification, GPT-4 for writing). Set temperature to 0.3-0.5 for consistent output (higher temperature = more creative but less predictable). Include examples in your prompt (few-shot prompting) for better results. Parse the response programmatically (ask for JSON output) instead of trying to extract data from freeform text.
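The last two tips combine naturally: embed a couple of worked examples in the prompt (few-shot) and ask for JSON, then parse the reply instead of scraping freeform text. A sketch where the example pairs and field names are illustrative, not from any provider's docs:

```python
import json

# Two hand-written example pairs shown to the model before the real input.
FEW_SHOT = [
    ("We build payroll automation for startups.",
     {"industry": "SaaS", "confidence": "high"}),
    ("Family-run bakery in Austin.",
     {"industry": "Other", "confidence": "high"}),
]

def build_few_shot_prompt(description: str) -> str:
    """Few-shot prompt that demonstrates the exact JSON shape we want back."""
    lines = [
        "Classify each company. Reply with JSON only, "
        'like {"industry": "...", "confidence": "..."}.',
        "",
    ]
    for desc, answer in FEW_SHOT:
        lines.append(f"Company: {desc}")
        lines.append(f"Answer: {json.dumps(answer)}")
        lines.append("")
    lines.append(f"Company: {description}")
    lines.append("Answer:")
    return "\n".join(lines)

def parse_json_reply(reply: str) -> dict:
    """Parse the model's JSON reply, tolerating surrounding whitespace."""
    return json.loads(reply.strip())
```

Showing the output format by example, rather than only describing it, makes the model far more likely to return parseable JSON; pairing that with a low temperature (0.3-0.5, as above) keeps the structure stable across thousands of calls.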
