Enterprise AI Development Platform with built-in tools, universal provider support, and factory pattern architecture. Production-ready with TypeScript support.
NeuroLink is an Enterprise AI Development Platform that unifies 9 major AI providers with intelligent fallback and built-in tool support. Available as both a programmatic SDK and professional CLI tool. Features 6 core tools working across all providers plus SDK custom tool registration. Extracted from production use at Juspay.
- Factory Pattern Architecture - Unified provider management through BaseProvider inheritance
- Tools-First Design - All providers include built-in tool support without additional configuration
- Real-time WebSocket Infrastructure - [Coming soon: broken in migration, fix in progress]
- Advanced Telemetry - [Coming soon: broken in migration, fix in progress]
- Enhanced Chat Services - [Coming soon: broken in migration, fix in progress]
- Enterprise Architecture - Production-ready with clean abstractions
- Configuration Management - Flexible provider configuration
- Type Safety - Industry-standard TypeScript interfaces
- Performance - Fast response times with streaming support
- Error Recovery - Graceful failures with provider fallback (see the sketch below)
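To make the error-recovery behavior concrete, here is a minimal sketch using the `createBestAIProvider()` and `generate()` calls documented later in this README. Fallback between providers happens inside NeuroLink, so application code only needs to handle errors that survive it (such as a timeout).

```typescript
import { createBestAIProvider } from "@juspay/neurolink";

// Picks the best configured provider; NeuroLink falls back to another
// provider automatically if the first one fails.
const provider = createBestAIProvider();

try {
  const result = await provider.generate({
    input: { text: "Summarize today's standup notes" },
    timeout: "30s",
  });
  console.log(result.content);
} catch (error) {
  // Only errors that survive the built-in fallback reach this point.
  if (error instanceof Error && error.name === "TimeoutError") {
    console.error("Request timed out across fallback attempts");
  } else {
    throw error;
  }
}
```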
NeuroLink now features a unified factory pattern architecture with automatic tool support for all providers.
- Unified Architecture: All providers inherit from BaseProvider with built-in tool support
- Direct Tools: Six core tools available across all providers (getCurrentTime, readFile, listDirectory, calculateMath, writeFile, searchFiles)
- Simplified Providers: Removed duplicate code - providers now focus only on model-specific logic
- Better Testing: 7 of 9 providers (78%) fully working with tools; the remaining 2 (22%) have partial support
- Zero Breaking Changes: All existing code continues working (backward compatibility)
- SDK Custom Tools: Register your own tools programmatically with the SDK
Factory Pattern: NeuroLink uses BaseProvider inheritance to provide consistent tool support across all AI providers without code duplication.
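To visualize the pattern, the sketch below shows the general shape of BaseProvider inheritance. Everything except the `BaseProvider` name and the six built-in tool names is an illustrative assumption, not NeuroLink's actual internal API.

```typescript
// Illustrative sketch of the factory pattern, not NeuroLink internals.
abstract class BaseProvider {
  // Shared behavior: every provider inherits the built-in tools.
  protected tools = [
    "getCurrentTime", "readFile", "listDirectory",
    "calculateMath", "writeFile", "searchFiles",
  ];

  async generate(options: { input: { text: string } }): Promise<string> {
    // Common concerns (tool wiring, retries, telemetry) live here once,
    // instead of being duplicated in every provider.
    return this.callModel(options.input.text);
  }

  // Only the model-specific call differs per provider.
  protected abstract callModel(prompt: string): Promise<string>;
}

class HypotheticalOpenAIProvider extends BaseProvider {
  protected async callModel(prompt: string): Promise<string> {
    return `response to: ${prompt}`; // placeholder for the real API call
  }
}

// The factory hands back whichever concrete provider is best configured.
function createProvider(): BaseProvider {
  return new HypotheticalOpenAIProvider();
}
```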
NeuroLink uses `stream()` as the primary streaming function, with a future-ready multi-modal interface.

- New Primary Streaming: `stream()` with a multi-modal-ready interface
- Enhanced Generation: `generate()` as the primary generation function
- Factory Enhanced: Better provider management across all methods
- Zero Breaking Changes: All existing code continues working (backward compatibility)

Enhanced API: NeuroLink uses `stream()` and `generate()` as primary functions with multi-modal-ready interfaces and improved factory patterns.
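As a minimal sketch of the primary streaming call, using the same `stream()` shape that appears in the framework examples later in this README:

```typescript
import { createBestAIProvider } from "@juspay/neurolink";

const provider = createBestAIProvider();

// stream() resolves to an object whose `stream` property is async-iterable.
const result = await provider.stream({
  input: { text: "Explain streaming in one paragraph" },
});

for await (const chunk of result.stream) {
  process.stdout.write(chunk.content); // print chunks as they arrive
}
```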
```bash
# Quick setup with Google AI Studio (free tier available)
export GOOGLE_AI_API_KEY="AIza-your-google-ai-api-key"

# CLI - No installation required
npx @juspay/neurolink generate "Hello, AI"
npx @juspay/neurolink gen "Hello, AI"   # Shortest form

# Primary method (generate) - Recommended
npx @juspay/neurolink generate "Explain AI" --provider google-ai
npx @juspay/neurolink gen "Write code" --provider openai   # Shortest form

# AI enhancement features
npx @juspay/neurolink generate "Explain AI" --enable-analytics --debug
npx @juspay/neurolink generate "Write code" --enable-evaluation --debug
npx @juspay/neurolink generate "Help me" --context '{"userId":"123"}' --debug

npx @juspay/neurolink status
```

```bash
# SDK installation for use in your TypeScript projects
npm install @juspay/neurolink
```
```typescript
import { NeuroLink, createBestAIProvider } from "@juspay/neurolink";

// NEW: Primary method (recommended)
const neurolink = new NeuroLink();
const result = await neurolink.generate({
  input: { text: "Write a haiku about programming" },
  provider: "google-ai",
  timeout: "30s", // Optional: set a custom timeout (default: 30s)
});

console.log(result.content);
console.log(`Used: ${result.provider}`);

// Alternative: auto-selects the best available provider
const provider = createBestAIProvider();
const providerResult = await provider.generate({
  input: { text: "Write a haiku about programming" },
  timeout: "30s",
});
console.log(providerResult.content);
```
Method aliases that match CLI command names:

```typescript
import { createBestAIProvider } from "@juspay/neurolink";

const provider = createBestAIProvider();

// Both methods are equivalent:
const result1 = await provider.generate({ input: { text: "Hello" } }); // Matches CLI 'generate'
const result2 = await provider.gen({ input: { text: "Hello" } }); // Matches CLI 'gen'

// Use whichever style you prefer:

// Detailed method name
const story = await provider.generate({
  input: { text: "Write a short story about AI" },
  maxTokens: 200,
});

// CLI-style method names
const poem = await provider.generate({ input: { text: "Write a poem" } });
const joke = await provider.gen({ input: { text: "Tell me a joke" } });
```
```bash
# Basic AI generation
npx @juspay/neurolink generate "Write a business email"

# With analytics and evaluation (NEW!)
npx @juspay/neurolink generate "Write a business email" --enable-analytics --enable-evaluation --debug

# Detailed usage data in the debug output:
#   Analytics: provider usage, token costs, response times
#   Response evaluation: AI-powered quality scores

# With custom context
npx @juspay/neurolink generate "Create a proposal" --context '{"company":"TechCorp"}' --debug
```
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Basic usage
const result = await neurolink.generate({ input: { text: "Write a story" } });

// With enhancements (NEW!)
const enhancedResult = await neurolink.generate({
  input: { text: "Write a business proposal" },
  enableAnalytics: true, // Get usage & cost data
  enableEvaluation: true, // Get AI quality scores
  context: { project: "Q1-sales" }, // Custom context
});

// Access enhancement data
console.log("Usage:", enhancedResult.analytics);
console.log("Quality:", enhancedResult.evaluation);
console.log("Response:", enhancedResult.content);

// Enhanced evaluation is included when enableEvaluation is true and
// returns basic quality scores for the generated content.
```
Enhanced chat services and the WebSocket infrastructure are being restored after the migration (see the feature list above); the intended usage looks like this:

```typescript
import {
  createEnhancedChatService,
  NeuroLinkWebSocketServer,
  createBestAIProvider,
} from "@juspay/neurolink";

// Enhanced chat with WebSocket support
const chatService = createEnhancedChatService({
  provider: await createBestAIProvider(),
  enableWebSocket: true,
  enableSSE: true,
  streamingConfig: {
    bufferSize: 8192,
    compressionEnabled: true,
  },
});

// WebSocket server for real-time applications
const wsServer = new NeuroLinkWebSocketServer({
  port: 8080,
  maxConnections: 1000,
  enableCompression: true,
});

// Handle real-time chat
wsServer.on("chat-message", async ({ connectionId, message }) => {
  await chatService.streamChat({
    prompt: message.data.prompt,
    onChunk: (chunk) => {
      wsServer.sendMessage(connectionId, {
        type: "ai-chunk",
        data: { chunk },
      });
    },
  });
});
```
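On the client side, a plain WebSocket connection can drive the handler above. This sketch assumes the message envelope implied by that handler (a `chat-message` event carrying `data.prompt`, answered by `ai-chunk` messages); adjust it to your actual protocol.

```typescript
import WebSocket from "ws"; // Node client; browsers can use the built-in WebSocket

const ws = new WebSocket("ws://localhost:8080");

ws.on("open", () => {
  // Shape inferred from the server handler above (message.data.prompt).
  ws.send(
    JSON.stringify({
      type: "chat-message",
      data: { prompt: "Stream me a fun fact" },
    }),
  );
});

ws.on("message", (raw) => {
  const message = JSON.parse(raw.toString());
  if (message.type === "ai-chunk") {
    process.stdout.write(message.data.chunk); // render chunks as they arrive
  }
});
```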
Advanced telemetry is likewise listed above as being restored; when enabled it provides optional enterprise monitoring:

```typescript
import {
  initializeTelemetry,
  getTelemetryStatus,
  createBestAIProvider,
} from "@juspay/neurolink";

// Optional enterprise monitoring (zero overhead when disabled)
const telemetry = initializeTelemetry({
  serviceName: "my-ai-app",
  endpoint: "http://localhost:4318",
  enableTracing: true,
  enableMetrics: true,
  enableLogs: true,
});

// Check telemetry status
const status = await getTelemetryStatus();
console.log("Telemetry enabled:", status.enabled);
console.log("Service:", status.service);
console.log("Version:", status.version);

// All AI operations are now automatically monitored
const provider = await createBestAIProvider();
const result = await provider.generate({
  prompt: "Generate business report",
});
// Telemetry automatically tracks: response time, token usage, cost, errors
```
```bash
# Create .env file (automatically loaded by CLI)
echo 'OPENAI_API_KEY="sk-your-openai-key"' > .env
echo 'GOOGLE_AI_API_KEY="AIza-your-google-ai-key"' >> .env
echo 'AWS_ACCESS_KEY_ID="your-aws-access-key"' >> .env

# Test configuration
npx @juspay/neurolink status
```
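The CLI loads `.env` automatically; in your own Node/TypeScript application you are responsible for loading it before constructing providers (the `dotenv` package, shown here, is one common approach):

```typescript
import "dotenv/config"; // loads .env into process.env before anything else runs
import { createBestAIProvider } from "@juspay/neurolink";

// Provider selection reads credentials such as OPENAI_API_KEY and
// GOOGLE_AI_API_KEY from the environment populated above.
const provider = createBestAIProvider();
```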
Complete Setup Guide - All providers with detailed instructions
- Factory Pattern Architecture - Unified provider management with BaseProvider inheritance
- Tools-First Design - All providers automatically include direct tool support (getCurrentTime, readFile, listDirectory, calculateMath, writeFile, searchFiles)
- 9 AI Providers - OpenAI, Bedrock, Vertex AI, Google AI Studio, Anthropic, Azure, Hugging Face, Ollama, Mistral AI
- Dynamic Model System - Self-updating model configurations without code changes
- Cost Optimization - Automatic selection of the cheapest models for tasks
- Smart Model Resolution - Fuzzy matching, aliases, and capability-based search
- Automatic Fallback - Never fail when providers are down
- CLI + SDK - Use from the command line or integrate programmatically
- Production Ready - TypeScript, error handling, extracted from production
- MCP Integration - Model Context Protocol with working built-in tools and 58+ external servers
- MCP Auto-Discovery - Zero-config discovery across VS Code, Claude, Cursor, Windsurf
- Built-in Tools - Time, date calculations, and number formatting ready to use
- AI Analysis Tools - Built-in optimization and workflow assistance
- Local AI Support - Run completely offline with Ollama
- Open Source Models - Access 100,000+ models via Hugging Face
- GDPR Compliance - European data processing with Mistral AI
| Component | Status | Description |
| --- | --- | --- |
| Built-in Tools | ✅ Working | 6 core tools fully functional across all providers |
| SDK Custom Tools | ✅ Working | Register custom tools programmatically |
| External Discovery | 🔍 Discovery | 58+ MCP servers discovered from the AI tools ecosystem |
| Tool Execution | ✅ Working | Real-time AI tool calling with built-in tools |
| External Tools | 🔧 In development | Manual config needs a one-line fix; activation in progress |
| CLI Integration | ✅ Ready | Production-ready with built-in tools |
| External Activation | 🔧 In development | Discovery complete; activation protocol in progress |
```bash
# Test built-in tools (works immediately)
npx @juspay/neurolink generate "What time is it?" --debug

# Disable tools for pure text generation
npx @juspay/neurolink generate "Write a poem" --disable-tools

# Discover available MCP servers
npx @juspay/neurolink mcp discover --format table
```
Register your own tools programmatically with the SDK:
```typescript
import { NeuroLink } from "@juspay/neurolink";
import { z } from "zod"; // schema library used for tool parameters

const neurolink = new NeuroLink();

// Register a simple tool
neurolink.registerTool("weatherLookup", {
  description: "Get current weather for a city",
  parameters: z.object({
    city: z.string().describe("City name"),
    units: z.enum(["celsius", "fahrenheit"]).optional(),
  }),
  execute: async ({ city, units = "celsius" }) => {
    // Your implementation here
    return {
      city,
      temperature: 22,
      units,
      condition: "sunny",
    };
  },
});

// Use it in generation
const result = await neurolink.generate({
  input: { text: "What's the weather in London?" },
  provider: "google-ai",
});

// Register multiple tools at once
neurolink.registerTools({
  stockPrice: {
    /* tool definition */
  },
  calculator: {
    /* tool definition */
  },
});
```
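For completeness, here is one way the placeholder definitions above might be filled in, continuing the same `neurolink` instance and mirroring the `registerTool` shape; the tool bodies and returned fields are invented for illustration.

```typescript
import { z } from "zod";

neurolink.registerTools({
  stockPrice: {
    description: "Get the latest price for a stock ticker",
    parameters: z.object({ ticker: z.string().describe("Ticker symbol") }),
    execute: async ({ ticker }) => {
      // Replace with a real market-data lookup.
      return { ticker, price: 123.45, currency: "USD" };
    },
  },
  calculator: {
    description: "Add two numbers",
    parameters: z.object({ a: z.number(), b: z.number() }),
    execute: async ({ a, b }) => ({ sum: a + b }),
  },
});
```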
NeuroLink now features a revolutionary dynamic model configuration system that eliminates hardcoded model lists and enables automatic cost optimization.
- Self-Updating: New models automatically available without code updates
- Cost-Optimized: Automatic selection of the cheapest models for tasks
- Smart Search: Find models by capabilities (function-calling, vision, etc.)
- Alias Support: Use friendly names like "claude-latest" or "best-coding"
- Real-Time Pricing: Always-current model costs and performance data
```bash
# Cost optimization - automatically use the cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost

# Capability search - find models with specific features
npx @juspay/neurolink generate "Describe this image" --capability vision

# Model aliases - use friendly names
npx @juspay/neurolink gen "Write code" --model best-coding

# Test the dynamic model server
npm run model-server          # Starts the config server on localhost:3001
npm run test:dynamic-models   # Comprehensive test suite
```
The current model configuration includes:

- 10 active models across 4 providers
- Cheapest: Gemini 2.0 Flash ($0.000075/1K tokens)
- Most capable: Claude 3 Opus (function-calling + vision + analysis)
- Best for coding: Claude 3 Opus, Gemini 2.0 Flash
- 1 deprecated model automatically excluded
Complete Dynamic Models Guide - Setup, configuration, and advanced usage
```bash
# Text generation with automatic MCP tool detection (default)
npx @juspay/neurolink generate "What time is it?"

# Alternative short form
npx @juspay/neurolink gen "What time is it?"

# Disable tools for training-data-only responses
npx @juspay/neurolink generate "What time is it?" --disable-tools

# With a custom timeout for complex prompts
npx @juspay/neurolink generate "Explain quantum computing in detail" --timeout 1m

# Real-time streaming with agent support (default)
npx @juspay/neurolink stream "What time is it?"

# Streaming without tools (traditional mode)
npx @juspay/neurolink stream "Tell me a story" --disable-tools

# Streaming with an extended timeout
npx @juspay/neurolink stream "Write a long story" --timeout 5m

# Provider diagnostics
npx @juspay/neurolink status --verbose

# Batch processing
echo -e "Write a haiku\nExplain gravity" > prompts.txt
npx @juspay/neurolink batch prompts.txt --output results.json

# Batch with a custom timeout per request
npx @juspay/neurolink batch prompts.txt --timeout 45s --output results.json
```
```typescript
// SvelteKit API route with timeout handling
import type { RequestHandler } from "@sveltejs/kit";
import { createBestAIProvider } from "@juspay/neurolink";

export const POST: RequestHandler = async ({ request }) => {
  const { message } = await request.json();
  const provider = createBestAIProvider();

  try {
    // NEW: Primary streaming method (recommended)
    const result = await provider.stream({
      input: { text: message },
      timeout: "2m", // 2 minutes for streaming
    });

    // Option A: process the stream server-side
    // for await (const chunk of result.stream) {
    //   console.log(chunk.content);
    // }

    // LEGACY: Backward-compatible call shape (still works)
    // const legacyResult = await provider.stream({
    //   prompt: message,
    //   timeout: "2m",
    // });

    // Option B: pipe the stream straight to the client
    return new Response(result.toReadableStream());
  } catch (error) {
    if (error instanceof Error && error.name === "TimeoutError") {
      return new Response("Request timed out", { status: 408 });
    }
    throw error;
  }
};
```
```typescript
// Next.js API route with timeout
import { NextRequest, NextResponse } from "next/server";
import { createBestAIProvider } from "@juspay/neurolink";

export async function POST(request: NextRequest) {
  const { prompt } = await request.json();
  const provider = createBestAIProvider();

  const result = await provider.generate({
    prompt,
    timeout: process.env.AI_TIMEOUT || "30s", // Configurable timeout
  });

  return NextResponse.json({ text: result.content });
}
```
No installation required! Experience NeuroLink through comprehensive visual documentation:
```bash
cd neurolink-demo && node server.js
# Visit http://localhost:9876 for the live demo
```
- Real AI Integration: All 9 providers functional with live generation
- Complete Use Cases: Business, creative, and developer scenarios
- Performance Metrics: Live provider analytics and response times
- Privacy Options: Test local AI with Ollama
- CLI Help & Commands - Complete command reference
- Provider Status Check - Connectivity verification (now with authentication and model availability checks)
- Text Generation - Real AI content creation
- Business Use Cases - Professional applications
- Developer Tools - Code generation and APIs
- Creative Tools - Content creation
Complete Visual Documentation - All screenshots and videos
- Provider Setup - Complete environment configuration
- CLI Guide - All commands and options
- SDK Integration - Next.js, SvelteKit, React
- Environment Variables - Full configuration guide
- Factory Pattern Migration - Guide to the new unified provider architecture
- MCP Foundation - Model Context Protocol architecture
- Dynamic Models - Self-updating model configurations and cost optimization
- AI Analysis Tools - Usage optimization and benchmarking
- AI Workflow Tools - Development lifecycle assistance
- Visual Demos - Screenshots and videos
- API Reference - Complete TypeScript API
- Framework Integration - SvelteKit, Next.js, Express.js
| Provider | Models | Auth Method | Free Tier | Tool Support |
| --- | --- | --- | --- | --- |
| OpenAI | GPT-4o, GPT-4o-mini | API Key | ❌ | ✅ Full |
| Google AI Studio | Gemini 2.5 Flash/Pro | API Key | ✅ | ✅ Full |
| Amazon Bedrock | Claude 3.5/3.7 Sonnet | AWS Credentials | ❌ | ✅ Full\* |
| Google Vertex AI | Gemini 2.5 Flash | Service Account | ❌ | ✅ Full |
| Anthropic | Claude 3.5 Sonnet | API Key | ❌ | ✅ Full |
| Azure OpenAI | GPT-4, GPT-3.5 | API Key + Endpoint | ❌ | ✅ Full |
| Hugging Face | 100,000+ models | API Key | ✅ | ⚠️ Partial |
| Ollama | Llama 3.2, Gemma, Mistral | None (Local) | ✅ | ⚠️ Partial\* |
| Mistral AI | Tiny, Small, Medium, Large | API Key | ✅ | ✅ Full |

Tool Support Legend:

- ✅ Full: All tools working correctly
- ⚠️ Partial: Tools visible but may not execute properly
- ❌ Limited: Issues with model or configuration
- \* Bedrock requires valid AWS credentials; Ollama requires specific models (e.g., gemma3n) for tool support
Auto-Selection: NeuroLink automatically chooses the best available provider based on speed, reliability, and configuration.
- Automatic Failover: Seamless provider switching on failures
- Error Recovery: Comprehensive error handling and logging
- Performance Monitoring: Built-in analytics and metrics
- Type Safety: Full TypeScript support with IntelliSense
- MCP Foundation: Universal AI development platform with 10+ specialized tools
- Analysis Tools: Usage optimization, performance benchmarking, parameter tuning
- Workflow Tools: Test generation, code refactoring, documentation, debugging
- Extensibility: Connect external tools and services via MCP protocol
- Dynamic Server Management: Programmatically add MCP servers at runtime
Note: External MCP server activation is in development. Current status:

- Working: 6 built-in tools across all providers
- Working: SDK custom tool registration
- Discovery: 58+ MCP servers found
- In development: external server activation (one-line fix pending)

Manual MCP configuration (`.mcp-config.json`) support is coming soon.
We welcome contributions! Please see our Contributing Guidelines for details.
```bash
git clone https://github.com/juspay/neurolink
cd neurolink
pnpm install
pnpm setup:complete   # One-command setup with all automation
pnpm test:adaptive    # Intelligent testing
pnpm build:complete   # Full build pipeline
```
NeuroLink now features enterprise-grade automation with 72+ commands:
```bash
# Environment & setup (2-minute initialization)
pnpm setup:complete   # Complete project setup
pnpm env:setup        # Safe .env configuration
pnpm env:backup       # Environment backup

# Testing & quality (60-80% faster)
pnpm test:adaptive    # Intelligent test selection
pnpm test:providers   # AI provider validation
pnpm quality:check    # Full quality pipeline

# Documentation & content
pnpm docs:sync        # Cross-file documentation sync
pnpm content:generate # Automated content creation

# Build & deployment
pnpm build:complete   # 7-phase enterprise pipeline
pnpm dev:health       # System health monitoring
```
Complete Automation Guide - All 72+ commands and automation features
MIT © Juspay Technologies
- Vercel AI SDK - Underlying provider implementations
- SvelteKit - Web framework used in this project
- Model Context Protocol - Tool integration standard
Built with ❤️ by Juspay Technologies