{"id":31978,"date":"2025-08-27T05:00:00","date_gmt":"2025-08-27T03:00:00","guid":{"rendered":"https:\/\/sii.pl\/blog\/?p=31978"},"modified":"2025-11-25T16:56:30","modified_gmt":"2025-11-25T15:56:30","slug":"the-human-in-the-loop-concept-human-involvement-in-ai-agents-decision-making-processes-with-langgraph","status":"publish","type":"post","link":"https:\/\/sii.pl\/blog\/en\/the-human-in-the-loop-concept-human-involvement-in-ai-agents-decision-making-processes-with-langgraph\/","title":{"rendered":"The Human-in-the-loop concept: Human involvement in AI agents&#8217; decision-making processes (with LangGraph)"},"content":{"rendered":"\n<p>Before we begin, we recommend familiarizing yourself with a few key definitions:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>LLM (Large Language Model) \u2013 a deep learning algorithm capable of performing natural language processing (NLP) tasks.<\/li>\n\n\n\n<li>Prompt \u2013 a natural language query provided to an LLM.<\/li>\n\n\n\n<li>Augmented LLM (Augmented Large Language Model) \u2013 an enhanced language model that uses an additional knowledge base (e.g., a company&#8217;s knowledge base) and supplementary tools to improve response quality.<\/li>\n\n\n\n<li>RAG (Retrieval-Augmented Generation) \u2013 a technique where the language model generates answers based on documents retrieved from external knowledge sources.<\/li>\n\n\n\n<li>AI Agent \u2013 an autonomous AI system capable of making decisions and executing actions using language models and tools.<\/li>\n\n\n\n<li>Graph \u2013 a workflow (process) in LangGraph, with defined nodes and their connections (actions).<\/li>\n\n\n\n<li>Orchestration \u2013 managing, coordinating, and controlling workflows to ensure consistent and harmonious operation.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong><strong>Introduction to Human-in-the-loop<\/strong><\/strong><\/h2>\n\n\n\n<p>The concept of <strong>Human-in-the-loop<\/strong> (HITL) refers to human involvement 
in the decision-making processes of AI systems.<\/p>\n\n\n\n<p>Although the <strong>HITL<\/strong> approach has existed in IT for a long time, it has become particularly valuable within AI. While artificial intelligence systems are highly effective today, there remain scenarios where human judgment and verification are indispensable \u2013 especially when precision and the safety of outcomes are crucial.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong><strong>How exactly does HITL work?<\/strong><\/strong><\/h2>\n\n\n\n<p>When an AI system encounters a scenario not covered by its predefined set of rules, it pauses the process (specifically, the workflow loop) to await human input. This human-provided response enables the AI agent to continue the process in the direction specified by the human operator.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong><strong>LangGraph: Building AI-driven processes<\/strong><\/strong><\/h2>\n\n\n\n<p>AI Agent applications can be developed using the basic SDK libraries provided by the companies that create language models. One of the most popular is <a href=\"https:\/\/openai.github.io\/openai-agents-python\/\" target=\"_blank\" rel=\"noopener nofollow\" title=\"\">the OpenAI SDK<\/a>, which is used to communicate with ChatGPT.<\/p>\n\n\n\n<p>However, alongside the evolution of LLM-based software, additional frameworks were also developed. These frameworks enable integration with models from multiple providers and offer many supplementary features that simplify agent creation and orchestration.<\/p>\n\n\n\n<p>One of the most popular frameworks currently is <strong>LangGraph<\/strong>, a library designed for building advanced decision-making workflows using LLMs (such as ChatGPT and others). 
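<\/p>\n\n\n\n<p>Before diving into the details, the core routing idea behind such workflows can be sketched without any framework: a graph is simply a set of named nodes (functions) plus edges chosen at runtime. The sketch below is a deliberately minimal, framework-free illustration in plain Python \u2013 all names (run, human_review, and so on) are invented for this example and are not LangGraph API calls:<\/p>

```python
# Framework-free sketch of a node graph: every node is a function that
# mutates a shared state dict and returns the name of the next node.
# "END" is a sentinel meaning the workflow has finished.
END = "END"

def agent(state):
    # A stand-in for the LLM call: produce a draft answer.
    state["answer"] = "echo: " + state["question"]
    return "human_review"

def human_review(state):
    # In a real HITL system, execution would pause here and wait for a person;
    # in this sketch, the decision is pre-recorded in the state instead.
    return "approved" if state.get("decision") == "y" else END

def approved(state):
    state["status"] = "approved"
    return END

NODES = {"agent": agent, "human_review": human_review, "approved": approved}

def run(state, entry="agent"):
    # Walk the graph from the entry node until a node returns END.
    node = entry
    while node != END:
        node = NODES[node](state)
    return state
```

<p>Running the sketch with a pre-recorded decision of y walks the path agent, human_review, approved. LangGraph generalizes exactly this pattern, adding state persistence, streaming, and a real pause for human input.<\/p>\n\n\n\n<p>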
It enables seamless integration of various components, including tools, external memory, and the core concept of human-model interaction within a specific graph (workflow).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong><strong>What is the Process Graph?<\/strong><\/strong><\/h3>\n\n\n\n<p>LangGraph allows workflow stages to be defined as a graph composed of nodes, each with a distinct purpose.<\/p>\n\n\n\n<p>Below are several node types we will use in our example:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Agent Node<\/strong> \u2013 responsible for communication with the LLM.<\/li>\n\n\n\n<li><strong>Tool Node<\/strong> \u2013 a tool that searches an external knowledge base.<\/li>\n\n\n\n<li><strong>Human Decision Node<\/strong> \u2013 where the user interacts with the process by accepting or rejecting the Agent&#8217;s work.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong><strong>The RAG technique<\/strong><\/strong><\/h3>\n\n\n\n<p>To build our augmented LLM, we need one more technique. In the example provided below, we used the <strong>RAG technique<\/strong>.<\/p>\n\n\n\n<p>RAG (Retrieval-Augmented Generation) extends the capabilities of an LLM&#8217;s built-in knowledge base by incorporating an additional retrieval step, which grants the model access to external data sources.<\/p>\n\n\n\n<p>Thanks to this, the model can generate more accurate responses, as it incorporates context from supplementary data sources, such as a company&#8217;s FAQ. The AI chatbot application can use these data sources to provide precise answers to the company&#8217;s customers.<\/p>\n\n\n\n<p>In our code, <strong>RAG<\/strong> allows the retrieval of documents indexed in a vector database.<\/p>\n\n\n\n<p><strong>Note<\/strong>: The subject of vector databases is very wide and beyond the scope of the current article. 
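<\/p>\n\n\n\n<p>To give a flavor of what a vector database stores, the toy sketch below ranks sentences by cosine similarity between embedding vectors. This is plain Python for illustration only \u2013 the 3-dimensional vectors are invented, whereas real embeddings (such as those produced by OpenAIEmbeddings) have hundreds or thousands of dimensions:<\/p>

```python
import math

# Invented 3-dimensional "embeddings" for illustration only; a real vector
# database stores model-generated vectors with far more dimensions.
DOCS = {
    "The average temperature in Bialystok is 27 C.": [0.9, 0.1, 0.0],
    "Warsaw can boast an average of 22.5 C.": [0.8, 0.2, 0.1],
    "Order pizza online.": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, near 0.0 for unrelated ones.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=2):
    # Nearest-neighbour search: the k documents most similar to the query vector.
    ranked = sorted(DOCS, key=lambda doc: cosine(query_vec, DOCS[doc]), reverse=True)
    return ranked[:k]
```

<p>A query vector pointing in the temperature direction, such as retrieve([1.0, 0.1, 0.0]), returns the two weather sentences first \u2013 conceptually the same nearest-neighbour lookup that the vector store performs for the retriever used later in this article.<\/p>\n\n\n\n<p>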
Nevertheless, they are crucial for converting text into mathematical representations of data used by LLM algorithms.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong><strong>Human-in-the-loop \u2013 LangGraph implementation<\/strong><\/strong><\/h2>\n\n\n\n<p>LangGraph is a framework providing an SDK for two programming languages: Python and JavaScript. In our example, we decided to use Python.<\/p>\n\n\n\n<p>As the &#8220;brain&#8221; of our system, we used OpenAI&#8217;s <strong>gpt-4o-mini<\/strong> model. Therefore, to run the application, you must first generate an <a href=\"https:\/\/platform.openai.com\/api-keys\" target=\"_blank\" rel=\"noopener nofollow\" title=\"\">API key<\/a> from the official OpenAI platform. You&#8217;ll also need to create a user account if you don&#8217;t already have one.<\/p>\n\n\n\n<p>In addition, using the OpenAI API may generate minor costs, especially if you choose NOT to share the conversation data for LLM training purposes. <a href=\"https:\/\/platform.openai.com\/settings\/organization\/data-controls\/sharing\" target=\"_blank\" rel=\"noopener nofollow\" title=\"\">You can find more information about this here<\/a>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong><strong>Development environment setup<\/strong><\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Python interpreter installed (version 3.10.x+)<\/li>\n\n\n\n<li>optionally \u2013 if you need a debugger \u2013 install the python3.10-dev library<\/li>\n\n\n\n<li>pip package manager installed, including the packages imported by the example code:\n<ul class=\"wp-block-list\">\n<li>pip install langgraph langchain-openai langchain-community langchain-unstructured langchain-text-splitters chromadb python-dotenv<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>API key from the OpenAI platform: <a href=\"https:\/\/platform.openai.com\/api-keys\" target=\"_blank\" rel=\"noopener nofollow\" title=\"\">https:\/\/platform.openai.com\/api-keys<\/a>\n<ul class=\"wp-block-list\">\n<li>please save your API key inside the application&#8217;s directory, 
e.g., in .env\n<ul class=\"wp-block-list\">\n<li>an example .env file can be found at the end of this article<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>knowledge-base.txt (you can find the file at the end of this article) \u2013 place this file within your application directory<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<p>Recommended application structure:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n|-- \/projects\/hitl-example-app\/\n  |\n  |-- .env (environment variables)\n  |-- app-hitl.py (application code)\n  |-- knowledge-base.txt (local knowledge base) \n<\/pre><\/div>\n\n\n<h3 class=\"wp-block-heading\"><strong><strong>Let&#8217;s start programming!<\/strong><\/strong><\/h3>\n\n\n\n<p>We start by importing the necessary libraries and packages (using <strong>import<\/strong> and <strong>from<\/strong> statements).<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n&quot;&quot;&quot;\nBase Python libraries\n&quot;&quot;&quot;\nimport os\nimport uuid\n\nfrom pathlib import Path\nfrom typing import Literal\nfrom dotenv import load_dotenv\nfrom IPython.display import Image, display\n\n&quot;&quot;&quot;\nRequired LangGraph and LangChain libraries\n&quot;&quot;&quot;\nfrom langchain_core.messages import SystemMessage\nfrom langchain_core.tools import tool\nfrom langchain_openai import ChatOpenAI, OpenAIEmbeddings\n\nfrom langgraph.checkpoint.memory import MemorySaver\nfrom langgraph.graph import StateGraph, MessagesState, START, END\nfrom langgraph.prebuilt import ToolNode\nfrom langgraph.types import Command, interrupt\n\n&quot;&quot;&quot;\nContext search libraries\n&quot;&quot;&quot;\nfrom langchain_community.vectorstores import Chroma\nfrom langchain_community.vectorstores.utils import filter_complex_metadata\nfrom langchain_unstructured import UnstructuredLoader\nfrom langchain_text_splitters import 
RecursiveCharacterTextSplitter\n<\/pre><\/div>\n\n\n<p>Then, we perform basic script configuration:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n&quot;&quot;&quot;\n-------------\nConfiguration\n-------------\n\nEnable detailed logging for model responses (optional)\n&quot;&quot;&quot;\ndetailed_model_response = False\n\n&quot;&quot;&quot;\nLoad environment variables such as API keys\n&quot;&quot;&quot;\nload_dotenv(dotenv_path=Path(&quot;.env&quot;))\n\n&quot;&quot;&quot;\n--------------------------------------------------------------------------------------------------------------------\nData source \/ Knowledge Base configuration\n--------------------------------------------------------------------------------------------------------------------\n\nSpecify the path to the knowledge base file, which will be vectorized\n&quot;&quot;&quot;\ndatasource = &#039;knowledge-base.txt&#039;\n\n&quot;&quot;&quot;\n--------------------------------------------------------------------------------------------------------------------\nBuild memory\n--------------------------------------------------------------------------------------------------------------------\n\nDefine a checkpointer responsible for saving and restoring our graph state (messages, current node settings, process state, etc.).\nIMPORTANT: In this example, we decide to use the non-persistent in-memory state saver.\n&quot;&quot;&quot;\nmemory = MemorySaver()\n<\/pre><\/div>\n\n\n<p>In the next step, we need to configure the model and tools that the AI Agent will use:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n&quot;&quot;&quot;\nDefine a tool function (using the `@tool` annotation) to search the knowledge base and augment the LLM\u2019s database.\nThe function receives the query param content from the LLM (via the `query` parameter).\nEach time it\u2019s 
called, it loads the data from a file, splits it into chunks, and converts every chunk into the vector format.\n&quot;&quot;&quot;\n@tool\ndef context_searcher(query: str):\n    &quot;&quot;&quot;Search the relevant documents&quot;&quot;&quot;\n\n    loader = UnstructuredLoader(datasource)\n    document = loader.load()\n\n    text_splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=50)\n    split_documents = text_splitter.split_documents(document)\n    filtered_documents = filter_complex_metadata(split_documents)\n\n    vectorstore = Chroma.from_documents(\n        documents=filtered_documents,\n        collection_name=&quot;knowledge-base&quot;,\n        embedding=OpenAIEmbeddings(),\n    )\n\n    retriever = vectorstore.as_retriever()\n    results = retriever.invoke(query)\n\n    return &quot;\\n&quot;.join(&#x5B;doc.page_content for doc in results])\n\n&quot;&quot;&quot;\n--------------------------------------------------------------------------------------------------------------------\nAI Agent\n--------------------------------------------------------------------------------------------------------------------\n\nDefine the list of tools available for LLM (model)\n&quot;&quot;&quot;\ntools = &#x5B;context_searcher]\n\n&quot;&quot;&quot;\nInitialize the LLM and attach available tools to LLM\n&quot;&quot;&quot;\nopenai_api_key = os.getenv(&quot;OPENAI_API_KEY&quot;)\nmodel = ChatOpenAI(model=&quot;gpt-4o-mini&quot;, temperature=0, api_key=openai_api_key).bind_tools(tools)\n\n&quot;&quot;&quot;\nIMPORTANT: The `temperature` is a key parameter in building agents \u2014 it controls the randomness of generated responses.\nWhen building agents based on a custom knowledge base, it is recommended to use a deterministic approach (no random responses),\nwhich is represented here by setting the temperature to 0.\n&quot;&quot;&quot;\n<\/pre><\/div>\n\n\n<p>The next step is defining the functions used in the graph\/workflow that are needed by the 
Human-in-the-loop (HITL) concept:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n&quot;&quot;&quot;\nhuman_interaction() \u2013 the most important function in the whole HITL concept - it pauses the workflow and waits for human interaction.\nNOTE: every function used in a graph node must define the possible exit paths to other nodes in the process. This is achieved via the return type annotation, e.g.:\n`Command&#x5B;Literal&#x5B;&quot;human_approved&quot;, &quot;human_rejected&quot;, END]]`.\nFinally, after the human&#039;s decision, the process proceeds to the appropriate node (&quot;human_approved&quot; or &quot;human_rejected&quot;), or terminates (END).\n&quot;&quot;&quot;\ndef human_interaction(state: MessagesState) -&gt; Command&#x5B;Literal&#x5B;&quot;human_approved&quot;, &quot;human_rejected&quot;, END]]:\n\n    &quot;&quot;&quot;IMPORTANT: LangGraph&#039;s `interrupt()` function pauses the graph\/process and prompts the user.&quot;&quot;&quot;\n    answer = interrupt(\n        {\n            &quot;question&quot;: &quot;Hi human! :) Is the answer correct? 
Type: `y` or `n`&quot;,\n        }\n    )\n\n    print(&quot;&gt;&gt;&gt; Agent message &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;\\n\\n&quot;)\n    print(&quot;Your answer: &quot;, answer, &quot;\\n\\n&quot;)\n\n    &quot;&quot;&quot;\n    Based on the user&#039;s response, the process is routed via a command to the appropriate graph node.\n    &quot;&quot;&quot;\n    if answer == &quot;y&quot;:\n        return Command(goto=&quot;human_approved&quot;)\n    if answer == &quot;n&quot;:\n        return Command(goto=&quot;human_rejected&quot;)\n    else:\n        print(&quot;Unsupported answer. Terminating...&quot;)\n        return Command(goto=END)\n\ndef human_approved(state: MessagesState) -&gt; Command&#x5B;END]:\n    print(&quot;&gt;&gt;&gt; Agent message &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;\\n\\n&quot;)\n    print(&quot;\u2705 Do something in approved path.&quot;)\n    return Command(goto=END)\n\ndef human_rejected(state: MessagesState) -&gt; Command&#x5B;END]:\n    print(&quot;&gt;&gt;&gt; Agent message &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;\\n\\n&quot;)\n    print(&quot;\u274c Do 
something in rejected path.&quot;)\n    return Command(goto=END)\n<\/pre><\/div>\n\n\n<p>Now that we have the necessary node functions for HITL, we can move on to graph modeling:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n&quot;&quot;&quot;\n--------------------------------------------------------------------------------------------------------------------\nBuild state graph workflow\n--------------------------------------------------------------------------------------------------------------------\n\nIMPORTANT: Below are a few utility functions used by the AI Agent.\n\ncall_model() \u2013 sends a prompt to the LLM and stores the model\u2019s response in the graph&#039;s state (please, take a look at the `memory` variable).\n&quot;&quot;&quot;\ndef call_model(state: MessagesState):\n    messages = state&#x5B;&#039;messages&#039;]\n    response = model.invoke(messages)\n    return {&quot;messages&quot;: &#x5B;response]}\n\n\n&quot;&quot;&quot;\nThis is a controller function (known as a conditional edge in LangGraph nomenclature).\nAfter receiving a response from the LLM, the controller determines the next node, e.g., whether to use a tool or ask a human.\n&quot;&quot;&quot;\ndef should_continue(state: MessagesState) -&gt; Literal&#x5B;&quot;tools&quot;, &quot;human_interaction&quot;]:\n    messages = state&#x5B;&#039;messages&#039;]\n    last_message = messages&#x5B;-1]\n    if last_message.tool_calls: # If the LLM needs to use a tool, route to the tool node\n        return &quot;tools&quot;\n\n    return &quot;human_interaction&quot;\n\n&quot;&quot;&quot;\nInitialize the graph (workflow)\n&quot;&quot;&quot;\ngraph_builder = StateGraph(MessagesState)\n\n&quot;&quot;&quot;\nAdd the agent node\n&quot;&quot;&quot;\ngraph_builder.add_node(&quot;agent&quot;, call_model)\n\n&quot;&quot;&quot;\nAdd the tool node, which the agent and LLM can 
use\n&quot;&quot;&quot;\ngraph_builder.add_node(&quot;tools&quot;, ToolNode(tools))\n\n&quot;&quot;&quot;\nAdd the nodes required by the HITL process\n&quot;&quot;&quot;\ngraph_builder.add_node(&quot;human_interaction&quot;, human_interaction)\ngraph_builder.add_node(&quot;human_approved&quot;, human_approved)\ngraph_builder.add_node(&quot;human_rejected&quot;, human_rejected)\n\n&quot;&quot;&quot;\nFinally, configure the entry and exit points (START, END) for the workflow, and the conditional edge, which allows \nthe agent to make routing decisions\n&quot;&quot;&quot;\ngraph_builder.add_edge(START, &quot;agent&quot;)\ngraph_builder.add_conditional_edges(&quot;agent&quot;, should_continue)\ngraph_builder.add_edge(&quot;tools&quot;, &quot;agent&quot;)\n<\/pre><\/div>\n\n\n<p>The graph will look like the following:<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img decoding=\"async\" width=\"457\" height=\"432\" src=\"https:\/\/sii.pl\/blog\/wp-content\/uploads\/2025\/08\/graph-visualisation.png\" alt=\"the graph\" class=\"wp-image-31963\" srcset=\"https:\/\/sii.pl\/blog\/wp-content\/uploads\/2025\/08\/graph-visualisation.png 457w, https:\/\/sii.pl\/blog\/wp-content\/uploads\/2025\/08\/graph-visualisation-300x284.png 300w\" sizes=\"(max-width: 457px) 100vw, 457px\" \/><figcaption class=\"wp-element-caption\">Fig. 
1 The graph<\/figcaption><\/figure>\n\n\n\n<p>Now all that remains is to compile the graph and run the application:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n&quot;&quot;&quot;\n--------------------------------------------------------------------------------------------------------------------\nCompile and run\n--------------------------------------------------------------------------------------------------------------------\n\nCompile the configured graph, specifying the memory module for state storage (`checkpointer=memory`)\n&quot;&quot;&quot;\ngraph_config = {&quot;configurable&quot;: {&quot;thread_id&quot;: uuid.uuid4()}}\ncompiled_graph = graph_builder.compile(checkpointer=memory)\n\n&quot;&quot;&quot;\nDefine the prompt \u2014 a specific query for the LLM\nNote that the prompt also indicates the data source (our knowledge base) and specifies the expected response format.\n\nInterestingly, the knowledge base file was intentionally obfuscated with irrelevant information to demonstrate\nhow the model can handle noise and extract the relevant data ;)\n&quot;&quot;&quot;\nprompt = {&quot;messages&quot;: &#x5B;\n        SystemMessage(content=&quot;Provide the temperature for all cities described in `knowledge-base`. 
Respond in JSON format without any additional text (JSON only without markdown).&quot;)\n]}\n\n&quot;&quot;&quot;\nDisplay messages generated during the process \u2014 up to the point when it waits for human interaction\n(the `stream_parser()` display helper is defined in the complete application code at the end of this article)\n&quot;&quot;&quot;\nfor event in compiled_graph.stream(prompt, config=graph_config, stream_mode=&quot;values&quot;):\n    stream_parser(event)\n\n&quot;&quot;&quot;\nReceive input from human\n&quot;&quot;&quot;\nhuman_response_input = input()\n&quot;&quot;&quot;\nFinally, we continue streaming messages after receiving the human answer\n&quot;&quot;&quot;\nfor event in compiled_graph.stream(Command(resume=human_response_input), config=graph_config, stream_mode=&quot;updates&quot;):\n    stream_parser(event)\n<\/pre><\/div>\n\n\n<p>You can find the complete application code at the end of this article.<\/p>\n\n\n\n<p>Now, let&#8217;s take a look at the application&#8217;s console output after running it:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n================================ System Message ================================\nProvide the temperature for all cities described in `knowledge-base`. Respond in JSON format without any additional \ntext (JSON only without markdown).\n================================== Ai Message ==================================\nTool Calls:\n  context_searcher (call_kZ7MnU6UjP2KlHd80P6bdok7)\n Call ID: call_kZ7MnU6UjP2KlHd80P6bdok7\n  Args:\n    query: temperature\n================================= Tool Message =================================\nName: context_searcher\n\nHere are some words to confuse the LLM and create confusion in the context. The average temperature\nconfusion in the context. The average temperature in Bialystok is 27 C.\nMeanwhile, Warsaw can boast an average of 22.5 C.\nHowever, in Gdansk it can be 25 C. 
And here is some more content to confuse the model again and see\n================================== Ai Message ==================================\nTool Calls:\n  context_searcher (call_L46mEnkzlfjfJu3DCZZwhhMF)\n Call ID: call_L46mEnkzlfjfJu3DCZZwhhMF\n  Args:\n    query: Bialystok temperature\n  context_searcher (call_9TgcqpBqjp4PayXy1mP0DL7w)\n Call ID: call_9TgcqpBqjp4PayXy1mP0DL7w\n  Args:\n    query: Warsaw temperature\n  context_searcher (call_2mWkttSW8tbo8sG25NpXCrYT)\n Call ID: call_2mWkttSW8tbo8sG25NpXCrYT\n  Args:\n    query: Gdansk temperature\n================================= Tool Message =================================\nName: context_searcher\n\nMeanwhile, Warsaw can boast an average of 22.5 C.\nMeanwhile, Warsaw can boast an average of 22.5 C.\nMeanwhile, Warsaw can boast an average of 22.5 C.\nMeanwhile, Warsaw can boast an average of 22.5 C.\n================================== Ai Message ==================================\nTool Calls:\n  context_searcher (call_7h7SIsFlIvXJ7ZamsMN36J6g)\n Call ID: call_7h7SIsFlIvXJ7ZamsMN36J6g\n  Args:\n    query: Bialystok\n  context_searcher (call_KzuJXj6S1MioIbnc7p9KBTH3)\n Call ID: call_KzuJXj6S1MioIbnc7p9KBTH3\n  Args:\n    query: Warsaw\n  context_searcher (call_uYle26b7Dd1rrQgEq8WWP5t3)\n Call ID: call_uYle26b7Dd1rrQgEq8WWP5t3\n  Args:\n    query: Gdansk\n================================= Tool Message =================================\nName: context_searcher\n\nMeanwhile, Warsaw can boast an average of 22.5 C.\nMeanwhile, Warsaw can boast an average of 22.5 C.\nMeanwhile, Warsaw can boast an average of 22.5 C.\nMeanwhile, Warsaw can boast an average of 22.5 C.\n================================== Ai Message ==================================\n{\n  &quot;Bialystok&quot;: &quot;27 C&quot;,\n  &quot;Warsaw&quot;: &quot;22.5 C&quot;,\n  &quot;Gdansk&quot;: &quot;25 C&quot;\n}\n&gt;&gt;&gt; User interaction 
&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;\n{&#039;question&#039;: &#039;Hi human! :) Is the answer correct? Type: `y` or `n`&#039;}\n\ny\n&gt;&gt;&gt; Agent message &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;\nYour answer:  y \n\n&gt;&gt;&gt; Agent message &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;\nDo something in approved path.\n<\/pre><\/div>\n\n\n<h3 class=\"wp-block-heading\"><strong><strong>What exactly happened here?<\/strong><\/strong><\/h3>\n\n\n\n<p>Firstly, the system used a predefined prompt to retrieve information about the current temperature in various Polish cities (the LLM performed this task without explicit instructions regarding any city name).<\/p>\n\n\n\n<p>Additionally, the LLM successfully retrieved the requested information despite some data being partially obscured. 
Finally, it asked a human to confirm its response.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong><strong>Summary<\/strong><\/strong><\/h2>\n\n\n\n<p>As you&#8217;ve seen, implementing the Human-in-the-loop concept using LangGraph is relatively straightforward.<\/p>\n\n\n\n<p>Importantly, it enables the effective combination of LLM capabilities with additional context (a knowledge base in our example) and essential human oversight.<\/p>\n\n\n\n<p>This approach significantly increases the utility of AI-driven applications.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong><strong>Complete application code<\/strong><\/strong><\/h2>\n\n\n\n<p>The <strong>.env<\/strong> file (environment variables) content:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\nOPENAI_API_KEY=YOUR_OPENAI_API_KEY\nANONYMIZED_TELEMETRY=False\n<\/pre><\/div>\n\n\n<p>The <strong>knowledge-base.txt<\/strong> file content:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\nHere are some words to confuse the LLM and create confusion in the context. The average temperature in Bialystok is 27 C.\nHowever, in Gdansk it can be 25 C. 
And here is some more content to confuse the model again and see how it handles with this.\nMeanwhile, Warsaw can boast an average of 22.5 C.\n<\/pre><\/div>\n\n\n<p>The <strong>app-hitl.py<\/strong> file (application code) content:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\n&quot;&quot;&quot;\nBase Python libraries\n&quot;&quot;&quot;\nimport os\nimport uuid\n\nfrom pathlib import Path\nfrom typing import Literal\nfrom dotenv import load_dotenv\nfrom IPython.display import Image, display\n\n&quot;&quot;&quot;\nRequired LangGraph and LangChain libraries\n&quot;&quot;&quot;\nfrom langchain_core.messages import SystemMessage\nfrom langchain_core.tools import tool\nfrom langchain_openai import ChatOpenAI, OpenAIEmbeddings\n\nfrom langgraph.checkpoint.memory import MemorySaver\nfrom langgraph.graph import StateGraph, MessagesState, START, END\nfrom langgraph.prebuilt import ToolNode\nfrom langgraph.types import Command, interrupt\n\n&quot;&quot;&quot;\nContext search libraries\n&quot;&quot;&quot;\nfrom langchain_community.vectorstores import Chroma\nfrom langchain_community.vectorstores.utils import filter_complex_metadata\nfrom langchain_unstructured import UnstructuredLoader\nfrom langchain_text_splitters import RecursiveCharacterTextSplitter\n\n&quot;&quot;&quot;\n-------------\nConfiguration\n-------------\n\nEnable more details in LLM response (debug logs)\n&quot;&quot;&quot;\n\ndetailed_model_response = False\n\n&quot;&quot;&quot;\nLoad external environment variables containing project-specific data (API keys, etc.)\n&quot;&quot;&quot;\nload_dotenv(dotenv_path=Path(&quot;.env&quot;))\n\n&quot;&quot;&quot;\n--------------------------------------------------------------------------------------------------------------------\nData source \/ Knowledge Base configuration\n--------------------------------------------------------------------------------------------------------------------\n&quot;&quot;&quot;\n\ndatasource = 
&#039;knowledge-base.txt&#039;\n\n&quot;&quot;&quot;\n--------------------------------------------------------------------------------------------------------------------\nBuild memory\n--------------------------------------------------------------------------------------------------------------------\n&quot;&quot;&quot;\n\nmemory = MemorySaver()\n\n&quot;&quot;&quot;\n--------------------------------------------------------------------------------------------------------------------\nUtils\n--------------------------------------------------------------------------------------------------------------------\n&quot;&quot;&quot;\n\ndef stream_parser(stream_message):\n  if &quot;messages&quot; in stream_message:\n    stream_message&#x5B;&quot;messages&quot;]&#x5B;-1].pretty_print()\n  if &quot;__interrupt__&quot; in stream_message:\n    print(&quot;&gt;&gt;&gt; User interaction &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;\\n\\n&quot;)\n    print(stream_message&#x5B;&quot;__interrupt__&quot;]&#x5B;-1].value)\n  else:\n    if detailed_model_response:\n      print(stream_message)\n\n  print(&quot;\\n&quot;)\n\n&quot;&quot;&quot;\n--------------------------------------------------------------------------------------------------------------------\nTools\n--------------------------------------------------------------------------------------------------------------------\n\nTool required to search the relevant documents (like prepared knowledge-base)\n&quot;&quot;&quot;\n@tool\ndef context_searcher(query: str):\n  &quot;&quot;&quot;Search the relevant documents&quot;&quot;&quot;\n\n  loader = UnstructuredLoader(datasource)\n  document = loader.load()\n\n  text_splitter = 
RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=50)\n  split_documents = text_splitter.split_documents(document)\n  filtered_documents = filter_complex_metadata(split_documents)\n\n  vectorstore = Chroma.from_documents(\n    documents=filtered_documents,\n    collection_name=&quot;knowledge-base&quot;,\n    embedding=OpenAIEmbeddings(),\n  )\n\n  retriever = vectorstore.as_retriever()\n  results = retriever.invoke(query)\n\n  return &quot;\\n&quot;.join(&#x5B;doc.page_content for doc in results])\n\n&quot;&quot;&quot;\n--------------------------------------------------------------------------------------------------------------------\nAI Agent\n--------------------------------------------------------------------------------------------------------------------\n\nList of available tools\n&quot;&quot;&quot;\ntools = &#x5B;context_searcher]\n\n&quot;&quot;&quot;\nInitialize the LLM model and attach the tools\n&quot;&quot;&quot;\nopenai_api_key = os.getenv(&quot;OPENAI_API_KEY&quot;)\nmodel = ChatOpenAI(model=&quot;gpt-4o-mini&quot;, temperature=0, api_key=openai_api_key).bind_tools(tools)\n\n&quot;&quot;&quot;\n--------------------------------------------------------------------------------------------------------------------\nFunctions required by the Human-in-the-loop concept\n--------------------------------------------------------------------------------------------------------------------\n&quot;&quot;&quot;\n\ndef human_interaction(state: MessagesState) -&gt; Command&#x5B;Literal&#x5B;&quot;human_approved&quot;, &quot;human_rejected&quot;, END]]:\n  &quot;&quot;&quot;note: we do not use the state in this graph example - remember that the state contains the LLM context&quot;&quot;&quot;\n  answer = interrupt(\n    {\n      &quot;question&quot;: &quot;Hi human! :) Is the answer correct? 
Type: `y` or `n`&quot;,\n    }\n  )\n\n  print(&quot;&gt;&gt;&gt; Agent message &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;\\n\\n&quot;)\n  print(&quot;Your answer: &quot;, answer, &quot;\\n\\n&quot;)\n\n  if answer == &quot;y&quot;:\n    return Command(goto=&quot;human_approved&quot;)\n  elif answer == &quot;n&quot;:\n    return Command(goto=&quot;human_rejected&quot;)\n  else:\n    print(&quot;Unsupported answer. Terminating...&quot;)\n    return Command(goto=END)\n\ndef human_approved(state: MessagesState) -&gt; Command&#x5B;END]:\n  print(&quot;&gt;&gt;&gt; Agent message &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;\\n\\n&quot;)\n  print(&quot;\u2705 Do something in approved path.&quot;)\n  return Command(goto=END)\n\ndef human_rejected(state: MessagesState) -&gt; Command&#x5B;END]:\n  print(&quot;&gt;&gt;&gt; Agent message &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;\\n\\n&quot;)\n  print(&quot;\u274c Do something in rejected path.&quot;)\n  return 
Command(goto=END)\n\n\n&quot;&quot;&quot;\n--------------------------------------------------------------------------------------------------------------------\nBuild state graph workflow\n--------------------------------------------------------------------------------------------------------------------\n\nGet LLM response function\n&quot;&quot;&quot;\ndef call_model(state: MessagesState):\n  messages = state&#x5B;&#039;messages&#039;]\n  response = model.invoke(messages)\n  return {&quot;messages&quot;: &#x5B;response]}\n\n&quot;&quot;&quot;\nConditional-edge function\n&quot;&quot;&quot;\ndef should_continue(state: MessagesState) -&gt; Literal&#x5B;&quot;tools&quot;, &quot;human_interaction&quot;]:\n  messages = state&#x5B;&#039;messages&#039;]\n  last_message = messages&#x5B;-1]\n\n  &quot;&quot;&quot;If the model needs a tool, then call the &quot;tools&quot; node&quot;&quot;&quot;\n  if last_message.tool_calls:\n    return &quot;tools&quot;\n\n  return &quot;human_interaction&quot;\n\n&quot;&quot;&quot;\nCreate a new State graph (workflow)\n&quot;&quot;&quot;\ngraph_builder = StateGraph(MessagesState)\n\n&quot;&quot;&quot;\nAdd an agent node\n&quot;&quot;&quot;\ngraph_builder.add_node(&quot;agent&quot;, call_model)\n\n&quot;&quot;&quot;\nAdd tools node\n&quot;&quot;&quot;\ngraph_builder.add_node(&quot;tools&quot;, ToolNode(tools))\n\n&quot;&quot;&quot;\nAdd human interaction nodes (HITL)\n&quot;&quot;&quot;\ngraph_builder.add_node(&quot;human_interaction&quot;, human_interaction)\ngraph_builder.add_node(&quot;human_approved&quot;, human_approved)\ngraph_builder.add_node(&quot;human_rejected&quot;, human_rejected)\n\n&quot;&quot;&quot;\nConfigure workflow conditions (edges)\n&quot;&quot;&quot;\ngraph_builder.add_edge(START, &quot;agent&quot;)\ngraph_builder.add_conditional_edges(&quot;agent&quot;, should_continue)\ngraph_builder.add_edge(&quot;tools&quot;, 
&quot;agent&quot;)\n\n&quot;&quot;&quot;\n--------------------------------------------------------------------------------------------------------------------\nCompile and run\n--------------------------------------------------------------------------------------------------------------------\n&quot;&quot;&quot;\n\ngraph_config = {&quot;configurable&quot;: {&quot;thread_id&quot;: uuid.uuid4()}}\ncompiled_graph = graph_builder.compile(checkpointer=memory)\n\n&quot;&quot;&quot;\nStreaming the model output\n&quot;&quot;&quot;\nprompt = {&quot;messages&quot;: &#x5B;\n  SystemMessage(content=&quot;Provide the temperature for all cities described in `knowledge-base`. Respond in JSON format without any additional text (JSON only without markdown).&quot;)\n]}\nfor event in compiled_graph.stream(prompt, config=graph_config, stream_mode=&quot;values&quot;):\n  stream_parser(event)\n\n&quot;&quot;&quot;\nWaiting for human response (if needed)\n&quot;&quot;&quot;\nhuman_response_input = input()\nfor event in compiled_graph.stream(Command(resume=human_response_input), config=graph_config, stream_mode=&quot;updates&quot;):\n  stream_parser(event)\n\n&quot;&quot;&quot;\nVisualize your graph\n&quot;&quot;&quot;\ndisplay(Image(compiled_graph.get_graph().draw_mermaid_png())) # works only in Jupyter notebooks\n<\/pre><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Before we begin, we recommend familiarizing yourself with a few key definitions: Introduction to Human-in-the-loop The concept of Human-in-the-loop (HILP) &hellip; <a class=\"continued-btn\" href=\"https:\/\/sii.pl\/blog\/en\/the-human-in-the-loop-concept-human-involvement-in-ai-agents-decision-making-processes-with-langgraph\/\">Continued<\/a><\/p>\n","protected":false},"author":735,"featured_media":32026,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_editorskit_title_hidden":false,"_editorskit_reading_time":0,"_editorskit_is_block_options_detached":false,"_editorskit_block_options_position":"{}","inline_featured_image":false,"footnotes":""},"categories":[1320],"tags":[2865,2866,2622,1526,1442,1458],"class_list":["post-31978","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-hard-development","tag-langgraph-en","tag-human-in-the-loop-en","tag-digital-en","tag-guidebook","tag-ai-en","tag-python-en"],"acf":[],"aioseo_notices":[],"republish_history":[],"featured_media_url":"https:\/\/sii.pl\/blog\/wp-content\/uploads\/2025\/08\/Code-1.jpg","category_names":["Hard 
development"],"_links":{"self":[{"href":"https:\/\/sii.pl\/blog\/en\/wp-json\/wp\/v2\/posts\/31978"}],"collection":[{"href":"https:\/\/sii.pl\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sii.pl\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sii.pl\/blog\/en\/wp-json\/wp\/v2\/users\/735"}],"replies":[{"embeddable":true,"href":"https:\/\/sii.pl\/blog\/en\/wp-json\/wp\/v2\/comments?post=31978"}],"version-history":[{"count":3,"href":"https:\/\/sii.pl\/blog\/en\/wp-json\/wp\/v2\/posts\/31978\/revisions"}],"predecessor-version":[{"id":32053,"href":"https:\/\/sii.pl\/blog\/en\/wp-json\/wp\/v2\/posts\/31978\/revisions\/32053"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sii.pl\/blog\/en\/wp-json\/wp\/v2\/media\/32026"}],"wp:attachment":[{"href":"https:\/\/sii.pl\/blog\/en\/wp-json\/wp\/v2\/media?parent=31978"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sii.pl\/blog\/en\/wp-json\/wp\/v2\/categories?post=31978"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sii.pl\/blog\/en\/wp-json\/wp\/v2\/tags?post=31978"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}