
3 posts tagged with "Claude"


· 6 min read
DahnM20

Most platforms that let you use Claude either charge a markup on top of Anthropic's pricing or lock you into their managed model access. AI-Flow takes the opposite approach: you bring your own Anthropic API key, it gets stored in an encrypted key store, and every Claude node in every workflow draws from it automatically. You pay Anthropic directly at their standard rates — nothing extra on the model cost side.

This article covers how to set it up, what the Claude node actually does, and a practical workflow to run once everything is connected.

Why BYOK matters for Claude

When you use Claude through a third-party platform without BYOK, you're often paying a percentage on top of Anthropic's input/output token rates. For light use, this is barely noticeable. For any workflow that runs frequently — summarization pipelines, classification at scale, document processing — the markup compounds quickly.

With a BYOK setup in AI-Flow, the cost for a Claude call is exactly what Anthropic charges for that model and token count. The platform fee covers AI-Flow itself, not a percentage of your model usage.
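To make the "no markup" claim concrete, here is a small sketch of how a per-call cost works out from token counts. The per-million-token rates below are hypothetical placeholders, not Anthropic's actual prices; check their pricing page for current numbers.

```python
# Sketch: direct cost of a single Claude call from token counts.
# The rates below are HYPOTHETICAL placeholders, not Anthropic's real prices.
RATES_PER_MILLION = {
    # model_name: (input_rate_usd, output_rate_usd) -- placeholder values
    "example-sonnet": (3.00, 15.00),
    "example-haiku": (0.25, 1.25),
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD: tokens / 1e6 times the per-million rate, input plus output."""
    in_rate, out_rate = RATES_PER_MILLION[model]
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# With BYOK this is the whole model bill for the call; a platform markup of,
# say, 20% would multiply it by 1.2 on every single run.
cost = call_cost("example-sonnet", input_tokens=2_000, output_tokens=500)
# 2000/1e6 * 3.00 + 500/1e6 * 15.00 = 0.006 + 0.0075 = 0.0135
```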

There's also a control argument: your key, your usage data. The call goes from AI-Flow's backend directly to the Anthropic API using your key, under your account.

Step 1 — Get your Anthropic API key

If you don't have one yet, create an account at console.anthropic.com, navigate to API Keys, and create a new key. Copy it — you'll paste it into AI-Flow in the next step.

Make sure your Anthropic account has credits or a billing method set up. The key won't work for API calls without it, regardless of how it's configured in AI-Flow.

Step 2 — Add the key to AI-Flow's key store

Open AI-Flow and click the settings icon (top right of the interface) to open the configuration panel. You'll see a Keys tab with input fields for each supported provider.

Paste your Anthropic API key into the Anthropic field and click Validate.

AI-Flow config panel with the Keys tab open

If you're logged into an AI-Flow account, the key is encrypted before being stored — it persists across browsers and sessions. If you're using AI-Flow without an account, the key is stored locally in your browser.

That's the entire setup. You don't configure the key per-node or per-workflow. Every Claude node in every canvas you create will automatically use it.

What the Claude node can do

Drop a Claude node on the canvas and open its settings. Here's what you can configure:

Model selection — Available models as of writing:

  • Claude 4.5 Haiku — fastest, lowest cost, good for classification and short tasks
  • Claude 4.5 Sonnet — balanced capability and speed
  • Claude 4.5 Opus — highest capability in the 4.5 line
  • Claude 4.6 Sonnet (default) — current recommended choice for most tasks
  • Claude 4.6 Opus — highest capability overall

Inputs:

  • Prompt — the main instruction or question. Has a connection handle so you can wire output from other nodes into it.
  • Context — optional additional data for Claude to reference (a document, scraped text, another model's output). Also has a handle.

Adaptive thinking — Enabled by default on Claude 4.6 models. It allows the model to think through complex problems before responding. You control the depth with an effort setting: low, medium (default for Sonnet 4.6), high, or max (for the hardest Opus 4.6 tasks).

Temperature — slider from 0 to 1. Lower values produce more deterministic output; higher values increase variation. Default is 1.

Output is streamed as it generates — you see the response building in real time beneath the node, rather than waiting for the full response.

Claude node on canvas with model selector open

A practical workflow: summarize and classify in one pass

Here's a simple pipeline that uses Claude to do two things at once — summarize a document and assign it a category — saving a round trip compared to running two separate prompts.

Step 1 — Text Input

Drop a Text Input node and paste in the document you want to process (an article, a support ticket, a report — whatever your use case requires).

Step 2 — Claude node

Connect the Text Input to the Context field of a Claude node. In the Prompt field, write:

You are a document analyst. Based on the context provided:
1. Write a 2-sentence summary.
2. Assign a single category from this list: Technology, Finance, Health, Legal, Other.

Format your response as:
Summary: <your summary>
Category: <category>

Set the model to Claude 4.6 Sonnet. Leave adaptive thinking on — it helps with instruction-following tasks like this.

Step 3 — Run

Hit Run. Claude reads the document from the context field, applies the prompt, and streams back a structured response. Results appear beneath the node as they stream in.

Canvas with Text Input connected to Claude node, output visible below

Step 4 — Extract the fields (optional)

If you want to use the summary or category in a downstream node, add an Extract Regex node or connect the output to a prompt in another node. For fully structured extraction, switch to the GPT Structured Output node instead — it enforces a JSON schema so the output is always machine-readable.
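If you ever move this step out of the canvas, the same two fields can be pulled out with a couple of regexes. This is a sketch assuming the response follows the `Summary:` / `Category:` format requested in the prompt above:

```python
import re

def parse_analysis(text: str) -> dict:
    """Extract the Summary and Category fields from a response that follows
    the 'Summary: ... / Category: ...' format requested in the prompt."""
    summary = re.search(r"Summary:\s*(.+)", text)
    category = re.search(r"Category:\s*(\w+)", text)
    return {
        "summary": summary.group(1).strip() if summary else None,
        "category": category.group(1) if category else None,
    }

response = "Summary: The report reviews Q3 cloud spend.\nCategory: Finance"
parsed = parse_analysis(response)
# parsed["category"] == "Finance"
```

This mirrors what an Extract Regex node does; the GPT Structured Output node remains the safer option when the output must always be machine-readable.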

Using Claude across multiple workflows

Once the key is in the store, you can use Claude in any number of workflows without any additional setup. Build a summarization pipeline today, a classification workflow tomorrow, an image-description workflow using the context field to pass image URLs — the key is always available.

The same applies to your other provider keys (OpenAI, Replicate, Google). Each is stored once and shared across all nodes and all canvases. If you rotate your Anthropic key, update it in the key store once and all your workflows pick up the new key automatically.

Starting from a template

Rather than building from scratch, the AI-Flow templates library has pre-built Claude workflows covering summarization, content generation, and multi-step reasoning pipelines. Load one, add your key if you haven't already, and run it.

Try it

Add your Anthropic API key to the AI-Flow key store, drop a Claude node on the canvas, and run your first prompt. The free tier is available without a subscription — you only pay for what you send to the Anthropic API.

· 6 min read
DahnM20

Chaining Claude and Replicate models together normally means writing API clients for two different services, handling rate limits, serializing outputs from one into inputs for the next, and gluing it all together with Python or Node.js. It works, but it's tedious — and every time you want to tweak the pipeline, you're back in the code.

AI-Flow is a visual workflow builder built specifically for this kind of multi-model pipeline. You connect your own API keys, drop nodes onto a canvas, wire them together, and run. No boilerplate, no deployment headache. This article walks through a practical example: using Claude to write a prompt, then feeding that prompt into a Replicate image model.

Why combine Claude and Replicate?

Claude is a strong reasoning model — good at interpreting vague instructions, structuring text, and generating detailed, specific prompts. Replicate hosts hundreds of open-source image, video, and audio models that respond well to precise, descriptive inputs.

The combination is practical: Claude takes your rough idea and turns it into an optimized prompt; a Replicate model like FLUX or Stable Diffusion turns that prompt into an image. The quality difference between a vague prompt and a Claude-crafted one is significant, especially for generative image models.

The problem is that wiring this up with raw API calls is repetitive and fragile. AI-Flow removes that friction.
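For comparison, here is roughly what the hand-rolled version of this chain looks like against the two HTTP APIs. Treat it as a sketch: the Claude model id and the Replicate model slug are illustrative, and error handling, retries, and rate-limit backoff are omitted, which is exactly the glue code you end up maintaining.

```python
import json
import os
import urllib.request

SYSTEM_PROMPT = (
    "You are a prompt engineer for image generation models. Rewrite the "
    "user's concept as a detailed, vivid image generation prompt. "
    "Output only the prompt, nothing else."
)

def claude_payload(concept: str, model: str = "claude-sonnet-4-5") -> dict:
    """Request body for Anthropic's Messages API (model id is illustrative)."""
    return {
        "model": model,
        "max_tokens": 300,
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": concept}],
    }

def post_json(url: str, payload: dict, headers: dict) -> dict:
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(),
        headers={**headers, "content-type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def concept_to_image_url(concept: str) -> str:
    """The whole chain: Claude rewrites the concept, Replicate renders it."""
    msg = post_json(
        "https://api.anthropic.com/v1/messages",
        claude_payload(concept),
        {"x-api-key": os.environ["ANTHROPIC_API_KEY"],
         "anthropic-version": "2023-06-01"},
    )
    image_prompt = msg["content"][0]["text"]
    # Replicate model slug is illustrative; any text-to-image model works.
    pred = post_json(
        "https://api.replicate.com/v1/models/black-forest-labs/flux-schnell/predictions",
        {"input": {"prompt": image_prompt}},
        {"Authorization": "Bearer " + os.environ["REPLICATE_API_TOKEN"],
         "Prefer": "wait"},
    )
    return pred["output"][0]

# concept_to_image_url("a fox reading a book in a rainy library")  # needs both keys
```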

What you need

  • An AI-Flow account (free tier works)
  • An Anthropic API key (for Claude)
  • A Replicate API key (for image models)
  • Both keys added to AI-Flow's key store

You pay Anthropic and Replicate directly at their standard rates.

Building the workflow: step by step

Step 1 — Add a Text Input node

Open a new canvas and drag in a Text Input node. This is where you'll type your raw concept — something like "a fox reading a book in a rainy library, warm light". Keeping this as a separate node means you can re-run with different prompts without touching the rest of the workflow.

Text Input node on canvas

Step 2 — Add a Claude node and configure it

Drag in a Claude node. In the node settings:

  • Select your model (Claude 4.6 Sonnet is a good default for this task)
  • Set the system prompt to something like: "You are a prompt engineer for image generation models. Take the user's concept and rewrite it as a detailed, vivid image generation prompt. Output only the prompt, nothing else."

Connect the output of the Text Input node to the user message input of the Claude node.

Claude node configuration

Step 3 — Add a Replicate node

Drag in a Replicate node. Click the model selector and search for the image model you want to use — Nano Banana 2, Flux Max, or any other image model from the Replicate catalog.

Map the prompt input of the Replicate node to the output of the Claude node. Most image models on Replicate accept a prompt field — the node interface surfaces the model's input schema so you can map fields directly.

Replicate node connected to Claude output

Step 4 — Run the workflow

Hit Run. AI-Flow executes the nodes in sequence: your raw concept goes into Claude, Claude returns an optimized prompt, that prompt goes to Replicate, and the generated image appears in the output node. The whole chain runs without you writing a single line of code.

To iterate, change the text in the Text Input node and run again. Claude will generate a different prompt, and Replicate will produce a new image.

Extending the workflow

Once the basic chain is working, there are natural extensions:

Add a second Replicate model. You could run the same Claude-generated prompt through two different image models side by side — connect the Claude output to two separate Replicate nodes to compare results.

Add conditional logic. AI-Flow supports branching nodes. If Claude's output contains certain keywords, you can route the prompt to a different model or add a negative prompt node before it reaches Replicate.

Expose it as an API. Use AI-Flow's API Builder to wrap the workflow as a REST endpoint. You POST a concept, the pipeline runs, and you get back the image URL — useful for integrating into your own app without maintaining the pipeline code yourself.
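From the calling side, your app then just POSTs the concept to the published endpoint. The URL, auth header, and field names below are placeholders; the real contract comes from the API Builder when you publish the workflow.

```python
import json
import urllib.request

# HYPOTHETICAL endpoint and payload shape -- the real URL, auth scheme, and
# field names come from AI-Flow's API Builder when you publish the workflow.
ENDPOINT = "https://example.invalid/api/workflows/claude-to-flux/run"

def build_request(concept: str, api_token: str) -> urllib.request.Request:
    """Assemble the POST your app would send to the published workflow."""
    body = json.dumps({"inputs": {"concept": concept}}).encode()
    return urllib.request.Request(
        ENDPOINT, data=body,
        headers={"Authorization": f"Bearer {api_token}",
                 "content-type": "application/json"},
    )

req = build_request("a fox reading a book in a rainy library", "YOUR_TOKEN")
# urllib.request.urlopen(req) would run the pipeline and return the image URL.
```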

Use a template as a starting point. The AI-Flow templates library has ready-made workflows for image generation pipelines. Loading one and swapping in your own prompts and models is faster than building from scratch.

What this approach avoids

The straightforward alternative to this is writing two API clients and a small orchestration script. That works, but you're maintaining code, handling errors, managing keys in environment variables, and redeploying whenever the pipeline changes. For a pipeline you'll iterate on frequently — different models, different prompts, different output formats — the overhead adds up.

AI-Flow keeps the pipeline state in the canvas. Changing a model is a dropdown selection. Changing the prompt structure is editing a text field. There's no diff to review, no test suite to update, no deployment to trigger.

Try it

If you have Anthropic and Replicate API keys and want to run this kind of pipeline today, open AI-Flow and start with a blank canvas or pick a template from the templates page. The free tier is enough to build and run this workflow.

· 2 min read
DahnM20

Introducing Claude 3 from Anthropic in AI-FLOW v0.7.0

Following user feedback, AI-FLOW has now integrated Claude from Anthropic, an upgrade to our text generation toolkit.

Example using Claude

Get Started

The Claude node is quite similar to the GPT one. You can add a textual prompt and additional context for Claude.

Example using Claude

The only difference is that the Claude node is a bit more customizable. You'll have access to:

  • temperature: use a value closer to 0.0 for analytical or multiple-choice tasks and closer to 1.0 for creative, generative tasks
  • max_tokens: the maximum number of tokens to generate before stopping
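These two settings map directly onto parameters of Anthropic's Messages API. A minimal request-body sketch (the prompt/context combination mirrors the node's two inputs; the helper name is ours, not part of AI-FLOW):

```python
# Sketch: the Claude node's settings correspond to Messages API parameters.
def claude_request(prompt: str, context: str = "",
                   temperature: float = 0.3, max_tokens: int = 512) -> dict:
    """Body for POST /v1/messages. Use temperature near 0.0 for analytical
    tasks and near 1.0 for creative ones, per the guidance above."""
    content = f"{context}\n\n{prompt}" if context else prompt
    return {
        "model": "claude-3-sonnet-20240229",  # one of the three models below
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": content}],
    }
```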

Here are the three model descriptions, according to Anthropic's documentation:

  • Claude 3 Opus: Most powerful model, delivering state-of-the-art performance on highly complex tasks and demonstrating fluency and human-like understanding
  • Claude 3 Sonnet: Most balanced model between intelligence and speed, a great choice for enterprise workloads and scaled AI deployments
  • Claude 3 Haiku: Fastest and most compact model, designed for near-instant responsiveness and seamless AI experiences that mimic human interactions

Dive into a world of enhanced text creation with Claude from Anthropic on AI-FLOW. Experience the power of advanced AI-driven text generation. Try it now!