API examples
This page provides examples and best practices for managing Langflow using the Langflow API.
The Langflow API's OpenAPI spec can be viewed and tested at your Langflow deployment's `/docs` endpoint, for example, `http://127.0.0.1:7860/docs`.
Export values
The examples in this guide use environment variables for common values. You might find it helpful to set the following variables in your terminal before you begin.
- Export your Langflow URL in your terminal. By default, Langflow starts at `http://127.0.0.1:7860`.

  ```bash
  export LANGFLOW_URL="http://127.0.0.1:7860"
  ```
- Export the `flow-id` in your terminal. The `flow-id` is found in the Publish pane or in the flow's URL.

  ```bash
  export FLOW_ID="359cd752-07ea-46f2-9d3b-a4407ef618da"
  ```
- Export the `folder-id` in your terminal. To find your folder ID, call the Langflow `/api/v1/folders/` endpoint for a list of folders:

  ```bash
  curl -X GET \
    "$LANGFLOW_URL/api/v1/folders/" \
    -H "accept: application/json"
  ```

  The result is a list of folders:

  ```json
  [
    {
      "name": "My Projects",
      "description": "Manage your own projects. Download and upload folders.",
      "id": "1415de42-8f01-4f36-bf34-539f23e47466",
      "parent_id": null
    }
  ]
  ```

  Export the `folder-id` as an environment variable.

  ```bash
  export FOLDER_ID="1415de42-8f01-4f36-bf34-539f23e47466"
  ```
- Export the Langflow API key as an environment variable. To create a Langflow API key, run the following command in the Langflow CLI:

  ```bash
  langflow api-key
  ```

  The command prints the new key:

  ```text
  API Key Created Successfully:
  sk-...
  ```

  Export the generated API key as an environment variable.

  ```bash
  export LANGFLOW_API_KEY="sk-..."
  ```
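With these variables set, you can sanity-check your setup before moving on. A minimal sketch, assuming the `/api/v1/flows/` endpoint (which lists your flows) is available on your deployment:

```bash
# Quick sanity check: list flows using the exported variables.
# The x-api-key header is only required when authentication is enabled.
curl -X GET \
  "$LANGFLOW_URL/api/v1/flows/" \
  -H "accept: application/json" \
  -H "x-api-key: $LANGFLOW_API_KEY"
```

If this returns a JSON list of flows, the URL and API key are working.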
Base
Use the base Langflow API to run your flow and retrieve configuration information.
Get all components
This operation returns a dictionary of all Langflow components.
```bash
curl -X GET \
  "$LANGFLOW_URL/api/v1/all" \
  -H "accept: application/json"
```

The result is a dictionary of all Langflow components.
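The full dictionary is large, so a quick overview can help. A minimal sketch, assuming the response's top-level keys correspond to component categories:

```bash
# Print only the top-level keys of the /all response.
# Assumes jq is installed and the top-level keys are component categories.
curl -s -X GET \
  "$LANGFLOW_URL/api/v1/all" \
  -H "accept: application/json" | jq 'keys'
```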
Run flow
Execute a specified flow by ID or name. The flow is executed as a batch, but LLM responses can be streamed.
This example runs a Basic Prompting flow with a given `flow_id`, passing a JSON object as the input value. The parameters are passed in the request body; in this example, they are the default values.
```bash
curl -X POST \
  "$LANGFLOW_URL/api/v1/run/$FLOW_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "input_value": "Tell me about something interesting!",
    "session_id": "chat-123",
    "input_type": "chat",
    "output_type": "chat",
    "output_component": "",
    "tweaks": null
  }'
```

The result contains the flow output:

```json
{
  "session_id": "chat-123",
  "outputs": [{
    "inputs": {
      "input_value": "Tell me about something interesting!"
    },
    "outputs": [{
      "results": {
        "message": {
          "text": "Sure! Have you ever heard of the phenomenon known as \"bioluminescence\"? It's a fascinating natural occurrence where living organisms produce and emit light. This ability is found in various species, including certain types of jellyfish, fireflies, and deep-sea creatures like anglerfish.\n\nBioluminescence occurs through a chemical reaction in which a light-emitting molecule called luciferin reacts with oxygen, catalyzed by an enzyme called luciferase. The result is a beautiful glow that can serve various purposes, such as attracting mates, deterring predators, or luring prey.\n\nOne of the most stunning displays of bioluminescence can be seen in the ocean, where certain plankton emit light when disturbed, creating a mesmerizing blue glow in the water. This phenomenon is often referred to as \"sea sparkle\" and can be seen in coastal areas around the world.\n\nBioluminescence not only captivates our imagination but also has practical applications in science and medicine, including the development of biosensors and imaging techniques. It's a remarkable example of nature's creativity and complexity!",
          "sender": "Machine",
          "sender_name": "AI",
          "session_id": "chat-123",
          "timestamp": "2025-03-03T17:17:37+00:00",
          "flow_id": "d2bbd92b-187e-4c84-b2d4-5df365704201",
          "properties": {
            "source": {
              "id": "OpenAIModel-d1wOZ",
              "display_name": "OpenAI",
              "source": "gpt-4o-mini"
            },
            "icon": "OpenAI"
          },
          "component_id": "ChatOutput-ylMzN"
        }
      }
    }]
  }]
}
```
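For scripting, you often want just the message text rather than the full response. A minimal sketch, assuming the response shape shown above; the `jq` path will differ if your flow has multiple outputs:

```bash
# Run the flow and print only the chat message text.
# The jq path matches the example response above; adjust it for your flow.
curl -s -X POST \
  "$LANGFLOW_URL/api/v1/run/$FLOW_ID" \
  -H "Content-Type: application/json" \
  -d '{"input_value": "Tell me about something interesting!"}' \
  | jq -r '.outputs[0].outputs[0].results.message.text'
```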
To stream LLM token responses, append the `?stream=true` query parameter to the request. LLM chat responses are streamed back as `token` events until the `end` event closes the connection.
```bash
curl -X POST \
  "$LANGFLOW_URL/api/v1/run/$FLOW_ID?stream=true" \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Tell me something interesting!",
    "session_id": "chat-123"
  }'
```

The result is a stream of JSON events:

```text
{"event": "add_message", "data": {"timestamp": "2025-03-03T17:20:18", "sender": "User", "sender_name": "User", "session_id": "chat-123", "text": "Tell me about something interesting!", "files": [], "error": false, "edit": false, "properties": {"text_color": "", "background_color": "", "edited": false, "source": {"id": null, "display_name": null, "source": null}, "icon": "", "allow_markdown": false, "positive_feedback": null, "state": "complete", "targets": []}, "category": "message", "content_blocks": [], "id": "0103a21b-ebf7-4c02-9d72-017fb297f812", "flow_id": "d2bbd92b-187e-4c84-b2d4-5df365704201"}}

{"event": "add_message", "data": {"timestamp": "2025-03-03T17:20:18", "sender": "Machine", "sender_name": "AI", "session_id": "chat-123", "text": "", "files": [], "error": false, "edit": false, "properties": {"text_color": "", "background_color": "", "edited": false, "source": {"id": "OpenAIModel-d1wOZ", "display_name": "OpenAI", "source": "gpt-4o-mini"}, "icon": "OpenAI", "allow_markdown": false, "positive_feedback": null, "state": "complete", "targets": []}, "category": "message", "content_blocks": [], "id": "27b66789-e673-4c65-9e81-021752925161", "flow_id": "d2bbd92b-187e-4c84-b2d4-5df365704201"}}

{"event": "token", "data": {"chunk": " Have", "id": "27b66789-e673-4c65-9e81-021752925161", "timestamp": "2025-03-03 17:20:18 UTC"}}

{"event": "token", "data": {"chunk": " you", "id": "27b66789-e673-4c65-9e81-021752925161", "timestamp": "2025-03-03 17:20:18 UTC"}}

{"event": "token", "data": {"chunk": " ever", "id": "27b66789-e673-4c65-9e81-021752925161", "timestamp": "2025-03-03 17:20:18 UTC"}}

{"event": "token", "data": {"chunk": " heard", "id": "27b66789-e673-4c65-9e81-021752925161", "timestamp": "2025-03-03 17:20:18 UTC"}}

{"event": "token", "data": {"chunk": " of", "id": "27b66789-e673-4c65-9e81-021752925161", "timestamp": "2025-03-03 17:20:18 UTC"}}

{"event": "token", "data": {"chunk": " the", "id": "27b66789-e673-4c65-9e81-021752925161", "timestamp": "2025-03-03 17:20:18 UTC"}}

{"event": "token", "data": {"chunk": " phenomenon", "id": "27b66789-e673-4c65-9e81-021752925161", "timestamp": "2025-03-03 17:20:18 UTC"}}

{"event": "end", "data": {"result": {"session_id": "chat-123", "message": "Sure! Have you ever heard of the phenomenon known as \"bioluminescence\"?..."}}}
```
This result is abbreviated, but it illustrates where the `end` event completes the LLM's token streaming response.
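To consume the stream programmatically, you can filter for `token` events as they arrive. A minimal shell sketch, assuming each event is a standalone JSON object as shown above and that `jq` is installed:

```bash
# Stream a flow run and print each token chunk as it arrives.
# curl -N disables output buffering; jq filters for token events.
curl -sN -X POST \
  "$LANGFLOW_URL/api/v1/run/$FLOW_ID?stream=true" \
  -H "Content-Type: application/json" \
  -d '{"message": "Tell me something interesting!", "session_id": "chat-123"}' \
  | jq -r --unbuffered 'select(.event == "token") | .data.chunk'
```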
Run endpoint headers and parameters
Parameters can be passed to the `/run` endpoint in three ways:

- URL path: `flow_id` as part of the endpoint path
- Query string: the `stream` parameter in the URL
- Request body: a JSON object containing the remaining parameters
Headers
| Header | Info | Example |
| --- | --- | --- |
| Content-Type | Required. Specifies the JSON format. | "application/json" |
| accept | Required. Specifies the response format. | "application/json" |
| x-api-key | Optional. Required only if authentication is enabled. | "sk-..." |
Parameters
| Parameter | Type | Info |
| --- | --- | --- |
| flow_id | UUID/string | Required. Part of URL: `/run/{flow_id}` |
| stream | boolean | Optional. Query parameter: `/run/{flow_id}?stream=true` |
| input_value | string | Optional. JSON body field. Main input text/prompt. Default: `null` |
| input_type | string | Optional. JSON body field. Input type ("chat" or "text"). Default: `"chat"` |
| output_type | string | Optional. JSON body field. Output type ("chat", "any", "debug"). Default: `"chat"` |
| output_component | string | Optional. JSON body field. Target component for output. Default: `""` |
| tweaks | object | Optional. JSON body field. Component adjustments. Default: `null` |
| session_id | string | Optional. JSON body field. Conversation context ID. Default: `null` |
Example request
```bash
curl -X POST \
  "$LANGFLOW_URL/api/v1/run/$FLOW_ID?stream=true" \
  -H "Content-Type: application/json" \
  -H "accept: application/json" \
  -H "x-api-key: $LANGFLOW_API_KEY" \
  -d '{
    "input_value": "Tell me a story",
    "input_type": "chat",
    "output_type": "chat",
    "output_component": "chat_output",
    "session_id": "chat-123",
    "tweaks": {
      "component_id": {
        "parameter_name": "value"
      }
    }
  }'
```
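The `tweaks` object is keyed by component ID, with parameter overrides nested under each ID. As an illustrative sketch, this request lowers a model component's temperature for a single run; the component ID here is taken from the earlier example response, and your flow's IDs will differ:

```bash
# Hypothetical tweaks example: override one component parameter per run.
# "OpenAIModel-d1wOZ" is the component ID from the example response above;
# find your own component IDs in the flow's API pane or exported flow JSON.
curl -X POST \
  "$LANGFLOW_URL/api/v1/run/$FLOW_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "input_value": "Tell me a story",
    "tweaks": {
      "OpenAIModel-d1wOZ": {
        "temperature": 0.2
      }
    }
  }'
```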
Webhook run flow
The webhook endpoint triggers flow execution with an HTTP POST request.
When a Webhook component is added to the workspace, a Webhook cURL tab becomes available in the API pane. It contains an HTTP POST request for triggering the Webhook component, similar to the call in this example.
To test the Webhook component in your flow, see the Webhook component.
```bash
curl -X POST \
  "$LANGFLOW_URL/api/v1/webhook/$FLOW_ID" \
  -H "Content-Type: application/json" \
  -d '{"data": "example-data"}'
```

The result confirms that the flow run has started:

```json
{"message": "Task started in the background", "status": "in progress"}
```
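The payload can be any JSON object, which the Webhook component makes available to the rest of the flow. A sketch with an illustrative payload (these keys are hypothetical, not a required schema):

```bash
# The webhook accepts arbitrary JSON; the keys below are examples only.
curl -X POST \
  "$LANGFLOW_URL/api/v1/webhook/$FLOW_ID" \
  -H "Content-Type: application/json" \
  -d '{"user": "alice", "event": "signup", "plan": "free"}'
```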
Process
This endpoint is deprecated. Use the `/run` endpoint instead.
Predict
This endpoint is deprecated. Use the `/run` endpoint instead.
Get task status
Get the status of a task.
```bash
curl -X GET \
  "$LANGFLOW_URL/api/v1/task/TASK_ID" \
  -H "accept: application/json"
```

The result reports the task's status:

```json
{
  "status": "Task status",
  "result": "Task result if completed"
}
```
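For background work, such as a webhook-triggered run, you may want to poll until the task finishes. A minimal sketch, assuming `TASK_ID` is set from an earlier asynchronous request and that the status string matches the webhook response above (exact status values depend on your Langflow version):

```bash
# Poll the task endpoint every 2 seconds until it is no longer in progress.
# Adjust the status string if your Langflow version reports other values.
while true; do
  STATUS=$(curl -s "$LANGFLOW_URL/api/v1/task/$TASK_ID" \
    -H "accept: application/json" | jq -r '.status')
  echo "status: $STATUS"
  [ "$STATUS" != "in progress" ] && break
  sleep 2
done
```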