Releases: neuron-core/neuron-ai
2.0.2
Merge branch 'main' into 2.x
2.0.1
Merge remote-tracking branch 'origin/2.x' into 2.x
2.0.0
Multi-Agent Orchestration Finally Possible In PHP
The core advancement in v2 is a complete rearchitecture of the Workflow system. In v1, Workflow was marked as experimental, and the implementation relied on a combination of nodes and edges to define the path a workflow must follow. V2 introduces an event-driven model that uses only nodes, which handle incoming events and emit new ones.
This shift eliminates the Edge class entirely and opens up possibilities that weren’t feasible with graph-like workflows. Nodes now trigger and respond to events, creating dynamic execution paths based on runtime conditions rather than predetermined sequences.
- Real-Time Streaming in multi-agent systems
- Human-in-the-Loop Without Complexity
Getting started with Workflow: https://docs.neuron-ai.dev/workflow/getting-started
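To illustrate the event-driven model, here is a rough sketch of a node that handles one event and emits another. Everything below (the event classes, the `__invoke()` signature, the routing behaviour) is a hypothetical illustration, not the documented v2 API — refer to the getting-started guide above for the real contract.

```php
<?php
// Hypothetical sketch only: event classes and the handler signature
// are assumptions, not the confirmed NeuronAI v2 API.

use NeuronAI\Workflow\Node;
use NeuronAI\Workflow\WorkflowState;

class DocumentLoaded // hypothetical event carrying a payload
{
    public function __construct(public readonly string $text) {}
}

class DocumentSummarized {} // hypothetical event emitted when done

class SummarizeNode extends Node
{
    // The node reacts to an incoming event and emits a new one; the
    // workflow then dispatches the emitted event to whichever node
    // declares a handler for it, instead of following a static edge.
    public function __invoke(DocumentLoaded $event, WorkflowState $state): DocumentSummarized
    {
        $state->set('summary', substr($event->text, 0, 100));
        return new DocumentSummarized();
    }
}
```

Because routing is decided by which event a node emits at runtime, branching no longer requires a predefined edge for every possible path.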
Neuron CLI – Enhanced Developer Tools
V2 ships with practical developer experience improvements that address common friction points. The new CLI tool introduces the “make” command, which scaffolds common classes and reduces boilerplate fatigue:
php vendor/bin/neuron make:agent App\\Neuron\\MyAgent
php vendor/bin/neuron make:rag App\\Neuron\\MyChatBot
php vendor/bin/neuron make:workflow App\\Neuron\\MyWorkflow
RAG Retrieval component
In the previous version the RAG component could interact with a vector store and an embeddings provider, but there was no way to customize this behavior. Recently, many retrieval techniques have emerged that aim to increase the accuracy of a RAG system.
NeuronAI RAG now has a separate retrieval component that allows you to implement different strategies for retrieving context from an external data source. By default, RAG uses SimilarityRetrieval, which simply replicates the previous behaviour, maintaining backward compatibility. Retrieval now depends on its own interface, so you can create a custom implementation and inject it into the RAG.
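As a sketch of what a custom strategy could look like: the interface name and the method signature below are assumptions based on the description above, not the verified contract, so check the RAG documentation before relying on them.

```php
<?php
// Hypothetical custom retrieval strategy. The interface name and the
// retrieve() signature are assumptions, not the confirmed NeuronAI API.

use NeuronAI\Chat\Messages\Message;
use NeuronAI\RAG\Document;

class KeywordBoostedRetrieval /* implements the retrieval interface (assumed) */
{
    /** @return Document[] */
    public function retrieve(Message $query): array
    {
        // E.g. run a vector-similarity search and a keyword search,
        // then merge and re-rank the two result sets before returning.
        return [];
    }
}
```

A custom implementation like this would then be injected into the RAG agent in place of the default SimilarityRetrieval.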
Evaluators
When building AI agents, evaluating their performance is crucial. It's important to consider various qualitative and quantitative factors, including response quality, task completion success, and inaccuracies or hallucinations.
Neuron introduces a system for creating evaluators against test cases, so you can continuously verify the output of your agentic entities over time.
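To illustrate the idea in plain PHP (this is a placeholder harness, not the Neuron evaluator API — the agent class, the test-case shape, and the pass check are all hypothetical):

```php
<?php
// Placeholder harness, not the real evaluator API: it only illustrates
// running an agent against fixed test cases and checking each output.

$testCases = [
    ['input' => 'What is the capital of France?', 'expected' => 'Paris'],
    ['input' => 'What is 2 + 2?', 'expected' => '4'],
];

foreach ($testCases as $case) {
    $output = \App\Neuron\MyAgent::make()
        ->chat(new \NeuronAI\Chat\Messages\UserMessage($case['input']));

    // A real evaluator would also score qualitative factors such as
    // response quality, completeness, and hallucinations.
    $passed = str_contains($output->getContent(), $case['expected']);
}
```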
Tool Max Tries
Agents now have a safety mechanism that tracks the number of times a tool is invoked during an execution session. If the agent exceeds this limit, execution is interrupted and an exception is thrown. By default the limit is 5 calls, counted for each tool individually. You can customize this value with the toolMaxTries method.
try {
    $result = YouTubeAgent::make()
        ->toolMaxTries(5) // Max number of calls for each tool
        ->chat(...);
} catch (ToolMaxTriesException $exception) {
    // do something
}
New Output validation rules
We introduced two new validation rules for structured output: WordsCount (applies to strings) and InRange (applies to numerics).
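A sketch of how the two rules might be attached to a structured-output class: the rule class names come from the release note above, but their namespaces and constructor parameters (`min`, `max`) are assumptions to verify against the documentation.

```php
<?php
// WordsCount and InRange are the rules named above; the namespaces and
// constructor parameters shown here are assumptions, not confirmed API.

use NeuronAI\StructuredOutput\SchemaProperty;
use NeuronAI\StructuredOutput\Validation\Rules\InRange;
use NeuronAI\StructuredOutput\Validation\Rules\WordsCount;

class MovieReview
{
    #[SchemaProperty(description: 'A short summary of the review')]
    #[WordsCount(max: 50)] // string rule: cap the summary at 50 words
    public string $summary;

    #[SchemaProperty(description: 'Score from 1 to 10')]
    #[InRange(min: 1, max: 10)] // numeric rule: keep the score in bounds
    public int $score;
}
```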
Upgrade guide
Read the upgrade guide to better understand the new version: https://docs.neuron-ai.dev/overview/readme/upgrade-guide
1.17.5
Improvements from #275
String-based node keys in the Workflow system
- String-based node keys: Use descriptive string keys like 'add1', 'multiply_first' instead of class names
- Multiple node instances: Instantiate the same node class multiple times with different parameters
- Full backward compatibility: Existing workflows using class names continue to work without modification
- Smart detection: Automatically detects whether nodes use string keys or class names
- Enhanced Mermaid export: Diagram generation handles both string keys and class names
Example
<?php

use NeuronAI\Workflow\Edge;
use NeuronAI\Workflow\Node;
use NeuronAI\Workflow\Workflow;
use NeuronAI\Workflow\WorkflowState;

// Reusable node classes
class AddNode extends Node
{
    public function __construct(private int $value) {}

    public function run(WorkflowState $state): WorkflowState
    {
        $current = $state->get('value', 0);
        $state->set('value', $current + $this->value);
        return $state;
    }
}

class MultiplyNode extends Node
{
    public function __construct(private int $value) {}

    public function run(WorkflowState $state): WorkflowState
    {
        $current = $state->get('value', 0);
        $state->set('value', $current * $this->value);
        return $state;
    }
}

class SubtractNode extends Node
{
    public function __construct(private int $value) {}

    public function run(WorkflowState $state): WorkflowState
    {
        $current = $state->get('value', 0);
        $state->set('value', $current - $this->value);
        return $state;
    }
}

// Terminal node (assumed definition; the original snippet did not include it)
class FinishNode extends Node
{
    public function run(WorkflowState $state): WorkflowState
    {
        return $state;
    }
}

// Workflow that calculates: (((value + 1) * 3) * 3) - 1
class CalculatorWorkflow extends Workflow
{
    public function nodes(): array
    {
        return [
            'add1' => new AddNode(1),
            'multiply3_first' => new MultiplyNode(3),
            'multiply3_second' => new MultiplyNode(3), // Same class, different instance!
            'sub1' => new SubtractNode(1),
            'finish' => new FinishNode()
        ];
    }

    public function edges(): array
    {
        return [
            new Edge('add1', 'multiply3_first'),
            new Edge('multiply3_first', 'multiply3_second'),
            new Edge('multiply3_second', 'sub1'),
            new Edge('sub1', 'finish')
        ];
    }

    protected function start(): string
    {
        return 'add1';
    }

    protected function end(): array
    {
        return ['finish'];
    }
}
1.17.4
Fix Typesense document ID
1.17.3
OpenAILike Provider
Easily connect to providers that offer the same data format as the official OpenAI API, without implementing a custom class in your application.
namespace App\Neuron;

use NeuronAI\Agent;
use NeuronAI\Chat\Messages\UserMessage;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\HttpClientOptions;
use NeuronAI\Providers\OpenAILike;

class MyAgent extends Agent
{
    public function provider(): AIProviderInterface
    {
        return new OpenAILike(
            baseUri: 'https://api.together.xyz/v1',
            key: 'API_KEY',
            model: 'MODEL',
        );
    }
}

echo MyAgent::make()->chat(new UserMessage("Hi!"));
// Hi, how can I help you today?
Check out the documentation.
1.17.2
AWS Bedrock provider
Connect your AI Agents to the AWS Bedrock runtime to use LLMs in your private cloud. Along with the AzureOpenAI and HuggingFace providers, it makes NeuronAI the PHP AI framework with the broadest support for inference infrastructures.
namespace App\Neuron;

use Aws\BedrockRuntime\BedrockRuntimeClient;
use NeuronAI\Agent;
use NeuronAI\Chat\Messages\UserMessage;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\AWS\BedrockRuntime;

class MyAgent extends Agent
{
    public function provider(): AIProviderInterface
    {
        $client = new BedrockRuntimeClient([
            'version' => 'latest',
            'region' => 'us-east-1',
            'credentials' => [
                'key' => 'AWS_BEDROCK_KEY',
                'secret' => 'AWS_BEDROCK_SECRET',
            ],
        ]);

        return new BedrockRuntime(
            client: $client,
            model: 'AWS_BEDROCK_MODEL',
        );
    }
}

echo MyAgent::make()->chat(new UserMessage("Hi!"));
// Hi, how can I help you today?
1.17.1
Stream Tool Call
Neuron Agents now stream not only the LLM response but also tool calls. You can implement UI feedback about the ongoing tool call behind the scenes:
use App\Neuron\MyAgent;
use NeuronAI\Chat\Messages\ToolCallMessage;
use NeuronAI\Chat\Messages\UserMessage;
use NeuronAI\Tools\Tool;
use NeuronAI\Tools\ToolInterface;

$stream = MyAgent::make()
    ->addTool(
        Tool::make(
            'get_server_configuration',
            'retrieve the server network configuration'
        )->addProperty(...)->setCallable(...)
    )
    ->stream(
        new UserMessage("What's the IP address of the server?")
    );

// Iterate chunks
foreach ($stream as $chunk) {
    if ($chunk instanceof ToolCallMessage) {
        // Output the ongoing tool call
        echo PHP_EOL . \array_reduce(
            $chunk->getTools(),
            fn (string $carry, ToolInterface $tool): string
                => $carry . '- Calling tool: ' . $tool->getName() . PHP_EOL,
            ''
        );
    } else {
        echo $chunk;
    }
}
// Let me retrieve the server configuration.
// - Calling tool: get_server_configuration
// The IP address of the server is: 192.168.0.10
1.17.0
Calendar Toolkit
1.16.23
Ensure the stdio disconnect function is called