Large Language Models (LLMs) have evolved from text generators into powerful reasoning engines capable of assisting with specialized technical tasks. When integrated with external tools, they can move beyond conversational assistants to perform dynamic, context-aware reasoning, and even engineering design — an approach often referred to as tool-augmented AI agents. These agents can query data, apply formulas, or make design decisions autonomously, opening up new possibilities for intelligent assistants in domains like engineering, finance, or scientific research.
In this project, we will develop an LLM-based electronics expert agent using the LangChain framework and the OpenAI API. The agent is simple: it designs RC circuits that meet specified electrical and timing constraints for microcontroller inputs, combining LLM reasoning with domain-specific calculation tools and design rules. This kind of agent demonstrates how AI can assist engineers in early-stage design, verification, and education — automating routine tasks while maintaining explainability. Similar architectures are already emerging in industry for circuit design, simulation workflows, and predictive maintenance, showing how LLMs can act as flexible problem-solvers across engineering disciplines.
I will present the code and explain everything hereafter. The complete Jupyter Notebook is available on my GitHub.
The first step is to import the needed libraries:
# OpenAI API to communicate with the LLM model:
from langchain_openai import ChatOpenAI
# to create a langchain agent:
from langchain.agents import create_agent
# a decorator to create tools, that can be simply functions that the LLM can use:
from langchain.tools import tool
# to add memory to the chats, so the LLM can remember previous messages
from langgraph.checkpoint.memory import InMemorySaver

Let's define the tools now. In this project, tools are Python functions decorated with @tool. It is important to write a complete and clear docstring for each tool (the description at the beginning of the function definition), because the LLM sees these docstrings and uses them as its guide to what each tool does, what its inputs are, and what it returns.
The first tool is a function that takes the MCU chip name and searches through available documents to find two parameters that are important in MCU ADC filter design. This would require RAG (Retrieval-Augmented Generation), which I did not implement; here I simply return hard-coded values. I return specific values for TI C2000 chips and different ones for any other chip, just to verify that the LLM is using the tool correctly.
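As a hint of what such a retrieval step might do, here is a toy stand-in, not a real RAG pipeline: a naive regex lookup over two hypothetical datasheet snippets. The snippet texts and their numbers are made up to mirror the placeholder values the actual tool below returns, not real datasheet data.

```python
import re

# Toy "document store": two made-up datasheet snippets (placeholder values,
# not real datasheet numbers) standing in for a real RAG index.
DOCS = {
    "TI_C2000": "ADC input leakage current: 1e-6 A. Sampling capacitor: 1e-6 F.",
    "GENERIC": "ADC input leakage current: 10e-6 A. Sampling capacitor: 20e-6 F.",
}

def lookup_mcu_params(chip_name: str) -> dict:
    """Naive 'retrieval': pick the chip's snippet and parse the two numbers."""
    doc = DOCS.get(chip_name, DOCS["GENERIC"])
    current, cap = re.findall(r"([0-9.e-]+) [AF]", doc)
    return {"i_mcu": float(current), "c_mcu": float(cap)}

print(lookup_mcu_params("TI_C2000"))  # {'i_mcu': 1e-06, 'c_mcu': 1e-06}
```

A real implementation would embed the datasheet chunks, store them in a vector database, and retrieve the relevant passage by similarity search.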
@tool
def get_mcu_info(chip_name: str) -> dict:
    """returns important MCU parameters: i_mcu: ADC input current of the MCU, c_mcu: ADC sampling capacitor of the MCU"""
    if chip_name == "TI_C2000":
        i = 1e-6
        c = 1e-6
    else:
        i = 10e-6
        c = 20e-6
    return {"i_mcu": i, "c_mcu": c}

I defined three other tools that the model must run one after another, feeding the output of each step into the next. I divided the whole process into multiple steps so that it is easy to evaluate.
@tool
def get_R_filter(i_mcu: float, c_mcu: float, sampling_frequency: float, max_voltage_drop: float) -> float:
    """Calculate the filter resistance"""
    Rmax = 1 / 4 / sampling_frequency / c_mcu
    Rmax = min(Rmax, max_voltage_drop / i_mcu)
    Rmin = 1000
    R_filter = (Rmax + Rmin) / 2
    return round(R_filter, 2)

@tool
def get_cutoff_frequency(R: float, C: float) -> float:
    """Calculates the cutoff frequency based on R and C input values"""
    return round(1 / 2 / 3.14 / R / C, 9)

@tool
def get_new_C(C: float, cutoff_frequency: float, desired_frequency: float) -> float:
    """Compare the cutoff frequency with the desired cutoff frequency and return a new value for C that moves it toward the desired cutoff frequency"""
    new_C = C * (1 + (cutoff_frequency / desired_frequency - 1) / 2)
    return round(new_C, 9)

Before defining our model and agent, we need to think about the system prompt. The system prompt is a hidden instruction given to an AI model before any user input — it sets the model's behavior, tone, and role. It is especially important here because the LLM relies on it to work correctly as an agent. You will understand better when you read the prompt:
SYSTEM_PROMPT = """You are a microcontroller (MCU) expert in designing an RC filter by choosing final values for: "filter resistance R_filter" and "filter capacitance C_filter". You have access to these tools: get_mcu_info, get_R_filter, get_cutoff_frequency, get_new_C
1) Initially, make sure you get the following four parameters from the user: MCU chip name, sampling frequency, maximum voltage drop and the desired cutoff frequency. If any of these 4 are missing, ask the user to provide them in the next message.
2) Get the MCU parameters. Then determine the filter resistance. Now you can get the cutoff frequency, assuming an initial filter capacitance of C_filter = 100e-9.
3) If the cutoff frequency you get is more than 5% different from the frequency asked by the user, use the get_new_C tool to get a new value for C and call get_cutoff_frequency again.
Repeat (3) until you are within 5% of the desired cutoff frequency."""

Now it is time to define our LLM model and let it know that it has some tools it can use. Here you must replace the API key, URL and model name with the correct information for your LLM API. In this project, I am using the free Polaris Alpha LLM API from the www.openrouter.ai platform. But before that, we need to define a response schema and a runtime context:
from dataclasses import dataclass

# a schema where the model can return the exact parameters, in addition to the text response:
@dataclass
class ResponseFormat:
    """Response schema for the agent."""
    punny_response: str
    final_cutoff_frequency: str | None = None
    R_filter: str | None = None
    final_C_filter: str | None = None

# a context to store user-specific runtime information:
@dataclass
class Context:
    """Custom runtime context schema."""
    user_id: str
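One piece is not shown above: the model object that gets passed to create_agent. With an OpenAI-compatible provider such as OpenRouter, it would look roughly like this sketch (the model name is taken from the response metadata shown later in this article; the API key is a placeholder you must replace with your own):

```python
from langchain_openai import ChatOpenAI

# Sketch of the model definition; replace the API key with your own.
model = ChatOpenAI(
    model="openrouter/polaris-alpha",          # model name on OpenRouter
    api_key="YOUR_OPENROUTER_API_KEY",         # placeholder, not a real key
    base_url="https://openrouter.ai/api/v1",   # OpenRouter's OpenAI-compatible endpoint
)
```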
# here we define our agent:
agent = create_agent(
    model=model,
    system_prompt=SYSTEM_PROMPT,
    tools=[get_mcu_info, get_R_filter, get_cutoff_frequency, get_new_C],  # tools are introduced here
    context_schema=Context,
    response_format=ResponseFormat,
    checkpointer=InMemorySaver()
)

Now everything is ready, and we can use the agent.invoke() method to send a query:
query = "I want to design an RC filter to use with MCU chip named 'TI_C2000'. I want the cutoff frequency to be at 4000 Hz and maximum voltage drop of 0.1 Volts. The sampling rate is 10000 Hz."
response = agent.invoke(
    {"messages": [{"role": "user", "content": query}]},
    config={"configurable": {"thread_id": "1"}},
    context=Context(user_id="1")
)
print(response['structured_response'])

ResponseFormat(punny_response="You're filtered for success! Based on your specs, here are your RC values:", final_cutoff_frequency='3883.80 Hz', R_filter='512.5 Ω', final_C_filter='80 nF')

The agent has done its job quite well. But let's see what the response includes in total:
response

{'messages': [HumanMessage(content="I want to design an RC filter to use with MCU chip named 'TI_C2000'. I want the cutoff frequency to be at 4000 Hz and maximum voltage drop of 0.1 Volts. The sampling rate is 10000 Hz.", additional_kwargs={}, response_metadata={}, id='033be0ee-8c1c-4853-85e2-910aaefbe0a7'),
AIMessage(content='', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 29, 'prompt_tokens': 596, 'total_tokens': 625, 'completion_tokens_details': {'accepted_prediction_tokens': None, 'audio_tokens': None, 'reasoning_tokens': 0, 'rejected_prediction_tokens': None}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_provider': 'openai', 'model_name': 'openrouter/polaris-alpha', 'system_fingerprint': None, 'id': 'gen-1762894061-EBGlSCLAZ1uXQ6yIBZDg', 'finish_reason': 'tool_calls', 'logprobs': None}, id='lc_run--9c8d2c0b-8513-41c8-b6be-ba72cd22d197-0', tool_calls=[{'name': 'get_mcu_info', 'args': {'chip_name': 'TI_C2000'}, 'id': 'ddb5bf9f-ffaa-4136-ab2d-d86cb75b1aa3', 'type': 'tool_call'}], usage_metadata={'input_tokens': 596, 'output_tokens': 29, 'total_tokens': 625, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}}),
ToolMessage(content='{"i_mcu": 1e-06, "c_mcu": 1e-06}', name='get_mcu_info', id='dbc908ef-35e7-451e-a516-b1b6207e640c', tool_call_id='ddb5bf9f-ffaa-4136-ab2d-d86cb75b1aa3'),
AIMessage(content='', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 46, 'prompt_tokens': 654, 'total_tokens': 700, 'completion_tokens_details': {'accepted_prediction_tokens': None, 'audio_tokens': None, 'reasoning_tokens': 0, 'rejected_prediction_tokens': None}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_provider': 'openai', 'model_name': 'openrouter/polaris-alpha', 'system_fingerprint': None, 'id': 'gen-1762894062-FVSWHY1WaTRlDlbXXg2L', 'finish_reason': 'tool_calls', 'logprobs': None}, id='lc_run--fb665f85-07c8-44f9-a1a7-813eea1a0b4e-0', tool_calls=[{'name': 'get_R_filter', 'args': {'i_mcu': 1e-06, 'c_mcu': 1e-06, 'sampling_frequency': 10000, 'max_voltage_drop': 0.1}, 'id': 'b68d8448-f1a2-47f5-bf39-4d74ef5caa37', 'type': 'tool_call'}], usage_metadata={'input_tokens': 654, 'output_tokens': 46, 'total_tokens': 700, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}}),
ToolMessage(content='512.5', name='get_R_filter', id='be16d0de-4b85-493a-b883-831572b53336', tool_call_id='b68d8448-f1a2-47f5-bf39-4d74ef5caa37'),
AIMessage(content='', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 29, 'prompt_tokens': 715, 'total_tokens': 744, 'completion_tokens_details': {'accepted_prediction_tokens': None, 'audio_tokens': None, 'reasoning_tokens': 0, 'rejected_prediction_tokens': None}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_provider': 'openai', 'model_name': 'openrouter/polaris-alpha', 'system_fingerprint': None, 'id': 'gen-1762894064-ratSk4t6znzLnrw5RaMo', 'finish_reason': 'tool_calls', 'logprobs': None}, id='lc_run--8ab250b2-7a57-485e-9282-059ea3cc1765-0', tool_calls=[{'name': 'get_cutoff_frequency', 'args': {'R': 512.5, 'C': 1e-07}, 'id': '8180a085-9f19-4874-947c-c4985834a972', 'type': 'tool_call'}], usage_metadata={'input_tokens': 715, 'output_tokens': 29, 'total_tokens': 744, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}}),
ToolMessage(content='3107.037439801', name='get_cutoff_frequency', id='b8110a05-0545-4672-b4fd-b64aaca4cdaa', tool_call_id='8180a085-9f19-4874-947c-c4985834a972'),
...
AIMessage(content='', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 62, 'prompt_tokens': 1077, 'total_tokens': 1139, 'completion_tokens_details': {'accepted_prediction_tokens': None, 'audio_tokens': None, 'reasoning_tokens': 0, 'rejected_prediction_tokens': None}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_provider': 'openai', 'model_name': 'openrouter/polaris-alpha', 'system_fingerprint': None, 'id': 'gen-1762894077-wpiMVq30zUMLdcxLsS0m', 'finish_reason': 'tool_calls', 'logprobs': None}, id='lc_run--28b7234f-d6d9-4be2-a804-cc94b9ec1e7f-0', tool_calls=[{'name': 'ResponseFormat', 'args': {'punny_response': 'Here are your RC filter values—cutoff the worries, not the signal!', 'final_cutoff_frequency': '3883.80 Hz', 'R_filter': '512.5 Ohms', 'final_C_filter': '80 nF'}, 'id': '5be2dcf9-eafa-4b0d-b05b-6f55b81f3bf6', 'type': 'tool_call'}], usage_metadata={'input_tokens': 1077, 'output_tokens': 62, 'total_tokens': 1139, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}}),
ToolMessage(content="Returning structured response: ResponseFormat(punny_response='Here are your RC filter values—cutoff the worries, not the signal!', final_cutoff_frequency='3883.80 Hz', R_filter='512.5 Ohms', final_C_filter='80 nF')", name='ResponseFormat', id='dbf8c728-b5db-4e2b-b837-b4a0a520f6dc', tool_call_id='5be2dcf9-eafa-4b0d-b05b-6f55b81f3bf6')],
'structured_response': ResponseFormat(punny_response='Here are your RC filter values—cutoff the worries, not the signal!', final_cutoff_frequency='3883.80 Hz', R_filter='512.5 Ohms', final_C_filter='80 nF')}

Here we can clearly see the iterations that ran in the background. I wrote a loop that prints each iteration of choosing a new capacitor value and recalculating the cutoff frequency:
from langchain_core.messages import AIMessage

n_steps = 0
for i, v in enumerate(response["messages"]):
    if isinstance(v, AIMessage):
        n_steps += 1
        print(f"{n_steps}) {v.tool_calls[0]['name']}({v.tool_calls[0]['args']}) = {response['messages'][i+1].content}\n")
print(f"The model ran {(n_steps - 2) / 2} iterations to reach the final result!")

1) get_mcu_info({'chip_name': 'TI_C2000'}) = {"i_mcu": 1e-06, "c_mcu": 1e-06}
2) get_R_filter({'i_mcu': 1e-06, 'c_mcu': 1e-06, 'sampling_frequency': 10000, 'max_voltage_drop': 0.1}) = 512.5
3) get_cutoff_frequency({'R': 512.5, 'C': 1e-07}) = 3107.037439801
4) get_new_C({'C': 1e-07, 'cutoff_frequency': 3107.037439801, 'desired_frequency': 4000}) = 8.9e-08
5) get_cutoff_frequency({'R': 512.5, 'C': 8.9e-08}) = 3491.053303147
6) get_new_C({'C': 8.9e-08, 'cutoff_frequency': 3491.053303147, 'desired_frequency': 4000}) = 8.3e-08
7) get_cutoff_frequency({'R': 512.5, 'C': 8.3e-08}) = 3743.41860217
8) get_new_C({'C': 8.3e-08, 'cutoff_frequency': 3743.41860217, 'desired_frequency': 4000}) = 8e-08
9) get_cutoff_frequency({'R': 512.5, 'C': 8e-08}) = 3883.796799751
10) ResponseFormat({'punny_response': 'Here are your RC filter values—cutoff the worries, not the signal!', 'final_cutoff_frequency': '3883.80 Hz', 'R_filter': '512.5 Ohms', 'final_C_filter': '80 nF'}) = Returning structured response: ResponseFormat(punny_response='Here are your RC filter values—cutoff the worries, not the signal!', final_cutoff_frequency='3883.80 Hz', R_filter='512.5 Ohms', final_C_filter='80 nF')
The model ran 4.0 iterations to reach the final result!

The output of the loop shows the agent ran four refinement iterations before reaching the final value for C.
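The same refinement loop can be reproduced deterministically in plain Python, using the identical formulas and rounding as the tools above, with the inputs from the query and the TI_C2000 placeholder parameters hard-coded:

```python
# Inputs from the query, plus the TI_C2000 placeholder parameters
i_mcu, c_mcu = 1e-6, 1e-6
sampling_frequency, max_voltage_drop, desired_frequency = 10000, 0.1, 4000

# get_R_filter: cap Rmax by sampling and voltage-drop limits, average with Rmin
Rmax = min(1 / 4 / sampling_frequency / c_mcu, max_voltage_drop / i_mcu)
R = round((Rmax + 1000) / 2, 2)

# iterate get_cutoff_frequency / get_new_C until within 5% of the target
C = 100e-9
fc = round(1 / 2 / 3.14 / R / C, 9)
updates = 0
while abs(fc - desired_frequency) / desired_frequency > 0.05:
    C = round(C * (1 + (fc / desired_frequency - 1) / 2), 9)
    fc = round(1 / 2 / 3.14 / R / C, 9)
    updates += 1

print(R, C, fc, updates)  # 512.5 8e-08 3883.796799751 3
```

Three get_new_C updates are needed before the cutoff frequency lands within 5% of the 4000 Hz target, matching the three get_new_C calls in the trace above.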
Now let's check whether the LLM closely follows the instructions, and whether the agent correctly keeps track of the user and chat thread. In the system prompt we indicated that the user must provide all four required input parameters, and that the agent should ask for any that are missing. In this step, I will send a new query that does not specify the sampling frequency. I will also change the "thread_id" so the agent treats this query as a new chat and does not reuse the previous parameters.
query = "I want to design an RC filter to use with MCU chip named 'TI_C2000'. I want the cutoff frequency to be at 4000 Hz and maximum voltage drop of 0.1 Volts."
response = agent.invoke(
    {"messages": [{"role": "user", "content": query}]},
    config={"configurable": {"thread_id": "2"}},
    context=Context(user_id="1")
)
print(response['structured_response'])

ResponseFormat(punny_response='Got it—let’s engineer a “cap-tivating” filter for your TI_C2000. First I need one more detail: what is your ADC sampling frequency (in samples per second / Hz)?', final_cutoff_frequency=None, R_filter=None, final_C_filter=None)

The LLM has correctly detected the missing parameter and asks for it. Let's send it in the next message:
query = "The sampling frequency is 50 kHz"
response = agent.invoke(
    {"messages": [{"role": "user", "content": query}]},
    config={"configurable": {"thread_id": "2"}},
    context=Context(user_id="1")
)
print(response['structured_response'])

ResponseFormat(punny_response='Your filter is now tuned closer than a dad joke to a groan.', final_cutoff_frequency='3728.08 Hz', R_filter='502.5 Ohms', final_C_filter='85 nF')

We have successfully coded an LLM-based AI agent. These agents can communicate with the user in natural language and employ tools and programs to perform advanced engineering tasks. The Python functions we used in this project were simple examples. Advanced agents can use simulation tools, gather information from documents or the internet, and run advanced solution-finding algorithms.