LangChain / LangGraph Streaming Schema Reference


The output schema of stream results in LangChain and LangGraph changes depending on the stream_mode setting and the subgraphs option.

In LangGraph the stream follows the structure of the graph's state, so there are minor differences, but the underlying principle is the same.

stream_mode: str

stream_mode accepts five values: "values", "updates", "custom", "messages", and "debug".

Here we look at the schemas of the two most commonly used modes, "messages" and "updates".

1. stream_mode == "messages"

Streams 2-tuples of (token, metadata) from every graph node where an LLM is invoked.

(AIMessageChunk | AIMessage | ToolMessage | HumanMessage | SystemMessage, metadata_dict)
  • First element: a LangChain message object (usually AIMessageChunk, AIMessage, or ToolMessage). Occasionally a HumanMessage or SystemMessage may appear as well.
  • Second element: metadata managed by LangGraph (node name, checkpoint, provider/model info, etc.).

Example code

from langchain_core.messages import HumanMessage

stream = agent.stream(
    {"messages": [HumanMessage(content="What's the weather in Tokyo?")]},
    stream_mode="messages",
)

for token, metadata in stream:
    print("token:", token)
    print("metadata:", metadata)

Example output

token:
content=[] additional_kwargs={} response_metadata={} id='run--14d47f46-eb39-4e12-a96e-a22c3e7d12cc'
metadata:
{'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content=[{'type': 'text', 'text': 'I', 'index': 0}] additional_kwargs={} response_metadata={} id='run--14d47f46-eb39-4e12-a96e-a22c3e7d12cc'
metadata:
{'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content=[{'type': 'text', 'text': "'ll", 'index': 0}] additional_kwargs={} response_metadata={} id='run--14d47f46-eb39-4e12-a96e-a22c3e7d12cc'
metadata:
{'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content=[{'type': 'text', 'text': ' check', 'index': 0}] additional_kwargs={} response_metadata={} id='run--14d47f46-eb39-4e12-a96e-a22c3e7d12cc'
metadata:
{'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content=[{'type': 'text', 'text': ' the weather in', 'index': 0}] additional_kwargs={} response_metadata={} id='run--14d47f46-eb39-4e12-a96e-a22c3e7d12cc'
metadata:
{'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content=[{'type': 'text', 'text': ' Tokyo for', 'index': 0}] additional_kwargs={} response_metadata={} id='run--14d47f46-eb39-4e12-a96e-a22c3e7d12cc'
metadata:
{'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content=[{'type': 'text', 'text': ' you.', 'index': 0}] additional_kwargs={} response_metadata={} id='run--14d47f46-eb39-4e12-a96e-a22c3e7d12cc'
metadata:
{'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content=[{'type': 'tool_use', 'id': 'toolu_bdrk_0167x8mfVNABv6Ys7egZ1rT6', 'name': 'get_weather', 'input': {}, 'index': 1}] additional_kwargs={} response_metadata={} id='run--14d47f46-eb39-4e12-a96e-a22c3e7d12cc' tool_calls=[{'name': 'get_weather', 'args': {}, 'id': 'toolu_bdrk_0167x8mfVNABv6Ys7egZ1rT6', 'type': 'tool_call'}] tool_call_chunks=[{'name': 'get_weather', 'args': '', 'id': 'toolu_bdrk_0167x8mfVNABv6Ys7egZ1rT6', 'index': 1, 'type': 'tool_call_chunk'}]
metadata:
{'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content=[{'type': 'tool_use', 'partial_json': '', 'index': 1}] additional_kwargs={} response_metadata={} id='run--14d47f46-eb39-4e12-a96e-a22c3e7d12cc' tool_calls=[{'name': '', 'args': {}, 'id': None, 'type': 'tool_call'}] tool_call_chunks=[{'name': None, 'args': '', 'id': None, 'index': 1, 'type': 'tool_call_chunk'}]
metadata:
{'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content=[{'type': 'tool_use', 'partial_json': '{"ci', 'index': 1}] additional_kwargs={} response_metadata={} id='run--14d47f46-eb39-4e12-a96e-a22c3e7d12cc' tool_calls=[{'name': '', 'args': {}, 'id': None, 'type': 'tool_call'}] tool_call_chunks=[{'name': None, 'args': '{"ci', 'id': None, 'index': 1, 'type': 'tool_call_chunk'}]
metadata:
{'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content=[{'type': 'tool_use', 'partial_json': 'ty": "', 'index': 1}] additional_kwargs={} response_metadata={} id='run--14d47f46-eb39-4e12-a96e-a22c3e7d12cc' invalid_tool_calls=[{'name': None, 'args': 'ty": "', 'id': None, 'error': None, 'type': 'invalid_tool_call'}] tool_call_chunks=[{'name': None, 'args': 'ty": "', 'id': None, 'index': 1, 'type': 'tool_call_chunk'}]
metadata:
{'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content=[{'type': 'tool_use', 'partial_json': 'Tokyo"}', 'index': 1}] additional_kwargs={} response_metadata={} id='run--14d47f46-eb39-4e12-a96e-a22c3e7d12cc' invalid_tool_calls=[{'name': None, 'args': 'Tokyo"}', 'id': None, 'error': None, 'type': 'invalid_tool_call'}] tool_call_chunks=[{'name': None, 'args': 'Tokyo"}', 'id': None, 'index': 1, 'type': 'tool_call_chunk'}]
metadata:
{'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content='' additional_kwargs={} response_metadata={'stop_reason': 'tool_use', 'stop_sequence': None} id='run--14d47f46-eb39-4e12-a96e-a22c3e7d12cc'
metadata:
{'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content='' additional_kwargs={} response_metadata={'model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0'} id='run--14d47f46-eb39-4e12-a96e-a22c3e7d12cc' usage_metadata={'input_tokens': 381, 'output_tokens': 64, 'total_tokens': 445, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}}
metadata:
{'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'checkpoint_ns': 'agent:25459b9d-644c-e6d0-fde6-07ad361d7d12', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content='The weather in Tokyo is sunny.' name='get_weather' id='f502f671-71f0-420c-9b6e-68db28cda97b' tool_call_id='toolu_bdrk_0167x8mfVNABv6Ys7egZ1rT6'
metadata:
{'langgraph_step': 2, 'langgraph_node': 'tools', 'langgraph_triggers': ('__pregel_push',), 'langgraph_path': ('__pregel_push', 0, False), 'langgraph_checkpoint_ns': 'tools:4e9f249b-7556-a1e4-d71d-5bc5812fa840'}

token:
content=[] additional_kwargs={} response_metadata={} id='run--ed04d1cf-cd21-49ac-8b88-a6b234cb5066'
metadata:
{'langgraph_step': 3, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:e1ba2241-eba6-362f-f25c-f719c3c66593', 'checkpoint_ns': 'agent:e1ba2241-eba6-362f-f25c-f719c3c66593', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content=[{'type': 'text', 'text': 'The weather in Tokyo', 'index': 0}] additional_kwargs={} response_metadata={} id='run--ed04d1cf-cd21-49ac-8b88-a6b234cb5066'
metadata:
{'langgraph_step': 3, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:e1ba2241-eba6-362f-f25c-f719c3c66593', 'checkpoint_ns': 'agent:e1ba2241-eba6-362f-f25c-f719c3c66593', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content=[{'type': 'text', 'text': ' is currently', 'index': 0}] additional_kwargs={} response_metadata={} id='run--ed04d1cf-cd21-49ac-8b88-a6b234cb5066'
metadata:
{'langgraph_step': 3, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:e1ba2241-eba6-362f-f25c-f719c3c66593', 'checkpoint_ns': 'agent:e1ba2241-eba6-362f-f25c-f719c3c66593', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content=[{'type': 'text', 'text': ' sunny!', 'index': 0}] additional_kwargs={} response_metadata={} id='run--ed04d1cf-cd21-49ac-8b88-a6b234cb5066'
metadata:
{'langgraph_step': 3, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:e1ba2241-eba6-362f-f25c-f719c3c66593', 'checkpoint_ns': 'agent:e1ba2241-eba6-362f-f25c-f719c3c66593', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content='' additional_kwargs={} response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None} id='run--ed04d1cf-cd21-49ac-8b88-a6b234cb5066'
metadata:
{'langgraph_step': 3, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:e1ba2241-eba6-362f-f25c-f719c3c66593', 'checkpoint_ns': 'agent:e1ba2241-eba6-362f-f25c-f719c3c66593', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}

token:
content='' additional_kwargs={} response_metadata={'model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0'} id='run--ed04d1cf-cd21-49ac-8b88-a6b234cb5066' usage_metadata={'input_tokens': 463, 'output_tokens': 11, 'total_tokens': 474, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}}
metadata:
{'langgraph_step': 3, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:e1ba2241-eba6-362f-f25c-f719c3c66593', 'checkpoint_ns': 'agent:e1ba2241-eba6-362f-f25c-f719c3c66593', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'}
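As the dump shows, a chunk's content is sometimes a plain string and sometimes (for Anthropic/Bedrock models) a list of content blocks. A small helper is handy for pulling out just the display text. This is a minimal sketch; the extract_text name is our own:

```python
def extract_text(content):
    """Return the display text from a streamed chunk's `content` field.

    Anthropic/Bedrock chat models emit `content` as a list of blocks
    ({'type': 'text', ...}, {'type': 'tool_use', ...}); many other
    providers emit a plain string.
    """
    if isinstance(content, str):
        return content
    # Keep only text blocks; tool_use / partial_json blocks carry no display text.
    return "".join(
        block.get("text", "")
        for block in content
        if isinstance(block, dict) and block.get("type") == "text"
    )
```

With this, streaming only the agent node's text reduces to something like `if metadata.get("langgraph_node") == "agent": print(extract_text(token.content), end="")` inside the loop above.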

2. stream_mode == "updates"

Streams state updates after each step of the graph. If multiple nodes run in the same step, each node's update is streamed separately.

{"<node_name>": {"messages": [AIMessage | ToolMessage]}}
  • node_name: the agent name or the LangGraph node name
  • messages: a list of message objects (AIMessage, ToolMessage)

Example code

stream = agent.stream(
    {"messages": [...]},
    stream_mode="updates",
)

for update in stream:
    for node, node_data in update.items():
        print("messages:", node_data.get("messages", []))

Example output

messages:
[AIMessage(content="I'll check the weather in Tokyo for you.", additional_kwargs={'usage': {'prompt_tokens': 381, 'completion_tokens': 64, 'cache_read_input_tokens': 0, 'cache_write_input_tokens': 0, 'total_tokens': 445}, 'stop_reason': 'tool_use', 'thinking': {}, 'model_id': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0'}, response_metadata={'usage': {'prompt_tokens': 381, 'completion_tokens': 64, 'cache_read_input_tokens': 0, 'cache_write_input_tokens': 0, 'total_tokens': 445}, 'stop_reason': 'tool_use', 'thinking': {}, 'model_id': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0'}, name='weather_agent', id='run--7fda82a4-55a2-4533-a125-f51f5e52f290-0', tool_calls=[{'name': 'get_weather', 'args': {'city': 'Tokyo'}, 'id': 'toolu_bdrk_019RcASj7iaMThafxxtscbDQ', 'type': 'tool_call'}], usage_metadata={'input_tokens': 381, 'output_tokens': 64, 'total_tokens': 445, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})]

messages:
[ToolMessage(content='The weather in Tokyo is sunny.', name='get_weather', id='9f8dbc1f-a4f8-4de3-a7d1-a3a06f68e1ea', tool_call_id='toolu_bdrk_019RcASj7iaMThafxxtscbDQ')]

messages:
[AIMessage(content='The weather in Tokyo is currently sunny!', additional_kwargs={'usage': {'prompt_tokens': 463, 'completion_tokens': 11, 'cache_read_input_tokens': 0, 'cache_write_input_tokens': 0, 'total_tokens': 474}, 'stop_reason': 'end_turn', 'thinking': {}, 'model_id': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0'}, response_metadata={'usage': {'prompt_tokens': 463, 'completion_tokens': 11, 'cache_read_input_tokens': 0, 'cache_write_input_tokens': 0, 'total_tokens': 474}, 'stop_reason': 'end_turn', 'thinking': {}, 'model_id': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0'}, name='weather_agent', id='run--aa84f6e8-7bb7-4c68-9194-c2df72c099b1-0', usage_metadata={'input_tokens': 463, 'output_tokens': 11, 'total_tokens': 474, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})]
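Because an "updates" chunk is a plain dict keyed by node name, flattening it into (node, message) pairs is straightforward. A minimal sketch (the helper name is our own; it also guards against a node payload without a "messages" key):

```python
def iter_update_messages(update):
    """Yield (node_name, message) pairs from one "updates"-mode chunk.

    `update` has the shape {"<node_name>": {"messages": [...]}, ...};
    a node's payload may omit "messages" or even be None, so guard for both.
    """
    for node, node_data in update.items():
        for message in (node_data or {}).get("messages", []):
            yield node, message
```

This turns the nested per-node dicts into a flat sequence you can log or forward to a client.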

stream_mode: list[str]

When stream_mode is given as a list, each streamed item becomes a tuple with the stream_mode value prepended.

(<stream_mode>, <schema of that mode>)

stream_mode == ["messages", "updates"]

Example code

stream = agent.stream(
    {"messages": [HumanMessage(content="What's the weather in Tokyo?")]},
    stream_mode=["messages", "updates"],
)

for stream_mode, data in stream:
    print(f"stream_mode: [{stream_mode}]")
    print("stream_data:", data)

Example output

...

stream_mode: [messages]
stream_data:
(AIMessageChunk(content='', additional_kwargs={}, response_metadata={'stop_reason': 'tool_use', 'stop_sequence': None}, id='run--28b5a6a3-2221-443d-b2ab-53bd4831af53'), {'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:6e722210-095e-1063-a6be-6372113ea677', 'checkpoint_ns': 'agent:6e722210-095e-1063-a6be-6372113ea677', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'})

stream_mode: [messages]
stream_data:
(AIMessageChunk(content='', additional_kwargs={}, response_metadata={'model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0'}, id='run--28b5a6a3-2221-443d-b2ab-53bd4831af53', usage_metadata={'input_tokens': 381, 'output_tokens': 64, 'total_tokens': 445, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}}), {'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:6e722210-095e-1063-a6be-6372113ea677', 'checkpoint_ns': 'agent:6e722210-095e-1063-a6be-6372113ea677', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'})

stream_mode: [updates]
stream_data:
{'agent': {'messages': [AIMessage(content=[{'type': 'text', 'text': "I'll check the weather in Tokyo for you.", 'index': 0}, {'type': 'tool_use', 'id': 'toolu_bdrk_01HDPVXJEBrvJf5JUizUrJEs', 'name': 'get_weather', 'input': {}, 'index': 1, 'partial_json': '{"city": "Tokyo"}'}], additional_kwargs={}, response_metadata={'stop_reason': 'tool_use', 'stop_sequence': None, 'model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0'}, name='weather_agent', id='run--28b5a6a3-2221-443d-b2ab-53bd4831af53', tool_calls=[{'name': 'get_weather', 'args': {'city': 'Tokyo'}, 'id': 'toolu_bdrk_01HDPVXJEBrvJf5JUizUrJEs', 'type': 'tool_call'}], usage_metadata={'input_tokens': 381, 'output_tokens': 64, 'total_tokens': 445, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})]}}

stream_mode: [messages]
stream_data:
(ToolMessage(content='The weather in Tokyo is sunny.', name='get_weather', id='4748357a-3e61-407c-a7d1-4e8f0dbd6152', tool_call_id='toolu_bdrk_01HDPVXJEBrvJf5JUizUrJEs'), {'langgraph_step': 2, 'langgraph_node': 'tools', 'langgraph_triggers': ('__pregel_push',), 'langgraph_path': ('__pregel_push', 0, False), 'langgraph_checkpoint_ns': 'tools:108cf825-f302-f9b7-50e8-600503c2e779'})

stream_mode: [updates]
stream_data:
{'tools': {'messages': [ToolMessage(content='The weather in Tokyo is sunny.', name='get_weather', id='4748357a-3e61-407c-a7d1-4e8f0dbd6152', tool_call_id='toolu_bdrk_01HDPVXJEBrvJf5JUizUrJEs')]}}

...
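One way to consume such a combined stream is to dispatch each (stream_mode, data) tuple to a per-mode handler, rather than branching inline. A sketch, with names of our own choosing:

```python
def demux(stream, handlers):
    """Route each (stream_mode, data) tuple from a combined stream to the
    handler registered for that mode; tuples for unregistered modes are dropped."""
    for mode, data in stream:
        handler = handlers.get(mode)
        if handler is not None:
            handler(data)

# Example wiring with literal tuples shaped like the output above:
# token chunks and state updates are collected into separate lists.
tokens, updates = [], []
demux(
    [("messages", ("chunk-1", {"langgraph_node": "agent"})),
     ("updates", {"agent": {"messages": []}})],
    {"messages": tokens.append, "updates": updates.append},
)
```

In a real app the handlers would be, for instance, a token-forwarding callback and a state-persistence callback.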

subgraphs=True

With subgraphs=False (the default) you get the flat schemas shown above; with subgraphs=True, a namespace is prepended.

# stream_mode: str
(namespace, data)

# stream_mode: list[str]
(namespace, stream_mode, data)
  • namespace: a tuple describing the subgraph call path, e.g. ("parent_node:<task_id>", "child_node:<task_id>")
  • stream_mode: present only when stream_mode is a list[str]; the mode value ("messages", "updates")
  • data: the same schema as the messages/updates modes above

LangChain agent

Example code

stream = agent.stream(
    {"messages": [HumanMessage(content="What's the weather in Tokyo?")]},
    stream_mode=["messages", "updates"],
    subgraphs=True,
)

for namespace, mode, data in stream:
    print(f"namespace: {namespace}")
    print(f"stream_mode: {mode}")
    print("stream_data:", data)

Example output

...

namespace: [()]
stream_mode: [messages]
stream_data:
(AIMessageChunk(content='', additional_kwargs={}, response_metadata={'model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0'}, id='run--a6a2e9f1-8b18-45aa-bfea-2a861e8687b9', usage_metadata={'input_tokens': 381, 'output_tokens': 64, 'total_tokens': 445, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}}), {'langgraph_step': 1, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'agent:df3ead51-7abc-3986-e43a-1eec555b039c', 'checkpoint_ns': 'agent:df3ead51-7abc-3986-e43a-1eec555b039c', 'ls_provider': 'amazon_bedrock', 'ls_model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0', 'ls_model_type': 'chat'})

namespace: [()]
stream_mode: [updates]
stream_data:
{'agent': {'messages': [AIMessage(content=[{'type': 'text', 'text': "I'll check the weather in Tokyo for you.", 'index': 0}, {'type': 'tool_use', 'id': 'toolu_bdrk_01NmDpRsH2iFjAqeBx5JZJnp', 'name': 'get_weather', 'input': {}, 'index': 1, 'partial_json': '{"city": "Tokyo"}'}], additional_kwargs={}, response_metadata={'stop_reason': 'tool_use', 'stop_sequence': None, 'model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0'}, name='weather_agent', id='run--a6a2e9f1-8b18-45aa-bfea-2a861e8687b9', tool_calls=[{'name': 'get_weather', 'args': {'city': 'Tokyo'}, 'id': 'toolu_bdrk_01NmDpRsH2iFjAqeBx5JZJnp', 'type': 'tool_call'}], usage_metadata={'input_tokens': 381, 'output_tokens': 64, 'total_tokens': 445, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})]}}

namespace: [()]
stream_mode: [messages]
stream_data:
(ToolMessage(content='The weather in Tokyo is sunny.', name='get_weather', id='5db02ec7-f332-4419-9e69-87bf34ab97a7', tool_call_id='toolu_bdrk_01NmDpRsH2iFjAqeBx5JZJnp'), {'langgraph_step': 2, 'langgraph_node': 'tools', 'langgraph_triggers': ('__pregel_push',), 'langgraph_path': ('__pregel_push', 0, False), 'langgraph_checkpoint_ns': 'tools:67ab8f27-25ad-45bc-d869-9574aea30afb'})

namespace: [()]
stream_mode: [updates]
stream_data:
{'tools': {'messages': [ToolMessage(content='The weather in Tokyo is sunny.', name='get_weather', id='5db02ec7-f332-4419-9e69-87bf34ab97a7', tool_call_id='toolu_bdrk_01NmDpRsH2iFjAqeBx5JZJnp')]}}

...

LangGraph subgraphs

When you configure a subgraph in LangGraph, the stream looks like this.

Example

from typing_extensions import TypedDict

from langgraph.graph import START, StateGraph

class SubgraphState(TypedDict):
    foo: str  # note that this key is shared with the parent graph state
    bar: str

def subgraph_node_1(state: SubgraphState):
    return {"bar": "bar"}

def subgraph_node_2(state: SubgraphState):
    return {"foo": state["foo"] + state["bar"]}

subgraph_builder = StateGraph(SubgraphState)
subgraph_builder.add_node(subgraph_node_1)
subgraph_builder.add_node(subgraph_node_2)
subgraph_builder.add_edge(START, "subgraph_node_1")
subgraph_builder.add_edge("subgraph_node_1", "subgraph_node_2")
subgraph = subgraph_builder.compile()

# Define parent graph
class ParentState(TypedDict):
    foo: str

def node_1(state: ParentState):
    return {"foo": "hi! " + state["foo"]}

builder = StateGraph(ParentState)
builder.add_node("node_1", node_1)
builder.add_node("node_2", subgraph)
builder.add_edge(START, "node_1")
builder.add_edge("node_1", "node_2")
graph = builder.compile()

for chunk in graph.stream(
    {"foo": "foo"},
    stream_mode="updates",
    subgraphs=True, 
):
    print(chunk)

Example output

((), {'node_1': {'foo': 'hi! foo'}})
(('node_2:...task_id...',), {'subgraph_node_1': {'bar': 'bar'}})
(('node_2:...task_id...',), {'subgraph_node_2': {'foo': 'hi! foobar'}})
((), {'node_2': {'foo': 'hi! foobar'}})
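Since the parent graph emits an empty namespace () while subgraph chunks carry the call path, filtering to top-level updates is a one-line generator. A sketch assuming the (namespace, data) two-tuple shape of a str-mode stream; the helper name is our own:

```python
def top_level_only(stream):
    """Yield only the chunks emitted by the parent graph itself,
    i.e. those whose namespace tuple is empty."""
    for namespace, data in stream:
        if namespace == ():
            yield data
```

Applied to the output above, this keeps the node_1 and node_2 updates and drops the subgraph_node_* ones.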

Summary

  • messages mode: receive LLM output and tool calls as token-level messages
  • updates mode: receive graph state updates
  • list mode: stream several modes at once
  • subgraphs=True: a namespace is added, distinguishing parent and child graph paths

In short, combining stream_mode and subgraphs to fit the situation gives you exactly the level of streaming detail you need.
