LLM Node

The LLM (Large Language Model) node allows you to run prompts directly inside your Workflow using one of the available AI models. This makes it possible to generate text, transform data, analyze inputs, or create structured outputs automatically.


🛠️ How It Works

  1. You select a provider (e.g., Zaia).

  2. You choose a model (e.g., Claude or GPT).

  3. You define the output type → text or JSON.

  4. You set the temperature → controls the balance between precision (deterministic answers) and creativity (varied answers).

  5. You write a prompt that the LLM will execute.

The result can then be passed to other nodes in the Workflow.
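Putting the steps above together, a node configuration might look like the following sketch. The field names are illustrative, not the platform's exact schema, and `{{input}}` stands in for whatever variable-interpolation syntax the Workflow uses:

```json
{
  "provider": "Zaia",
  "model": "claude-sonnet-4.5",
  "output_type": "json",
  "temperature": 0.1,
  "prompt": "Summarize the following text in one sentence: {{input}}"
}
```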


⚙️ Configuration Options

  • Provider → The service powering the LLM (e.g., Zaia).

  • Model → Which model to use (e.g., claude-sonnet-4.5).

  • Output type

    • Text: free-form answer.

    • JSON: structured response (ideal for automation).

  • Temperature

    • Low = deterministic (e.g., 0.1 → precise answers).

    • High = creative (e.g., 0.8 → more variation).

  • Prompt → The instruction/query you want the LLM to process.
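To make the constraints on these options concrete, here is a small validation sketch in Python. The field names, the `text`/`json` output types, and the assumed 0.0–1.0 temperature range are illustrative assumptions, not the platform's actual schema:

```python
# Hypothetical validator for an LLM node configuration.
# Field names and the temperature range are illustrative assumptions.

VALID_OUTPUT_TYPES = {"text", "json"}

def validate_llm_node(config: dict) -> list[str]:
    """Return a list of problems found in the node config (empty = valid)."""
    errors = []
    # Every option from the list above must be present.
    for field in ("provider", "model", "output_type", "temperature", "prompt"):
        if field not in config:
            errors.append(f"missing field: {field}")
    if config.get("output_type") not in VALID_OUTPUT_TYPES:
        errors.append("output_type must be 'text' or 'json'")
    temp = config.get("temperature")
    if not isinstance(temp, (int, float)) or not 0.0 <= temp <= 1.0:
        errors.append("temperature must be a number between 0.0 and 1.0")
    return errors

config = {
    "provider": "Zaia",
    "model": "claude-sonnet-4.5",
    "output_type": "json",
    "temperature": 0.1,
    "prompt": "Summarize the input in one sentence.",
}
print(validate_llm_node(config))  # → []
```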


📌 Example Use Cases

  • Summarization: Input long text, output a short summary.

  • Data transformation: Convert unstructured text into JSON for APIs.

  • Creative generation: Ask for product descriptions, social media posts, or marketing copy.

  • Decision support: Evaluate conditions and suggest next steps.


🚀 Example

Prompt:

Extract the email and phone number from the following text and return them as JSON:

"Hi, my name is John. You can reach me at [email protected] or call me at +1 555 123 4567."  

Output (JSON):

{
  "email": "[email protected]",
  "phone": "+1 555 123 4567"
}
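Because the output is valid JSON, a downstream node (or any other consumer of the result) can parse it directly. For example, in Python:

```python
import json

# The LLM's JSON output from the example above, as a raw string.
llm_output = '{"email": "[email protected]", "phone": "+1 555 123 4567"}'

data = json.loads(llm_output)
print(data["email"])  # → [email protected]
print(data["phone"])  # → +1 555 123 4567
```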

Tip: Always be explicit in your prompts about the format you expect (e.g., JSON, bullet points, step-by-step instructions).
