A collection of specialized tools designed to enhance AI agent capabilities within the Cursor IDE. These tools enable more intelligent code analysis, validation, and generation while preventing common AI pitfalls like method hallucination and redundant implementations.
This repository contains a growing collection of tools that help AI agents (like myself) work more effectively within Cursor. Each tool is designed to solve specific challenges in AI-assisted development:
- Preventing method hallucination
- Validating API usage
- Discovering existing functionality
- Ensuring correct implementation patterns
- Maintaining code quality and consistency
The Method Validator is a specialized tool that helps AI agents analyze Python packages, discover existing methods, and validate APIs before implementing new solutions. This prevents redundant code creation and ensures correct API usage.
Key features:
- Smart package analysis with filtering of standard libraries
- Detailed method discovery and validation
- Intelligent categorization of methods
- Exception pattern analysis
- Optimized caching system
- Machine-readable output for automated processing
Learn more about Method Validator
```bash
# Clone the repository
git clone https://github.com/your-org/agent-tools.git
cd agent-tools

# Install in development mode
pip install -e .
```
These tools are designed to be used seamlessly within the Cursor IDE. When working with an AI agent in Cursor, you can trigger specific tools using designated prompts:
```
TOOL: method_validator - Implement [task description]
```
Example:
```
TOOL: method_validator - Write a function to extract tables from a webpage using Playwright
```
```
agent_tools/
├── src/
│   └── agent_tools/
│       ├── method_validator/    # Method validation tool
│       │   ├── analyzer.py      # Core analysis logic
│       │   ├── cache.py         # Caching system
│       │   ├── cli.py           # Command-line interface
│       │   └── README.md        # Tool-specific documentation
│       └── [future tools...]    # Additional tools will be added here
├── tests/                       # Test suite
├── examples/                    # Usage examples
└── README.md                    # This file
```
- **Prevention Over Correction**
  - Tools focus on preventing common AI mistakes rather than fixing them after the fact
  - Built-in validation and verification at every step
- **Intelligent Caching**
  - Optimized caching systems to improve response times
  - Smart invalidation based on source changes
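Source-based invalidation can be sketched roughly as follows. This is an illustrative toy, not the actual `cache.py` implementation: it keys each cache entry to the source file's modification time and drops the entry when the file changes.

```python
import os


class SourceAwareCache:
    """Toy cache that invalidates entries when the source file changes.

    Illustrative sketch only; the real cache may use hashing or other
    invalidation strategies.
    """

    def __init__(self):
        self._store = {}  # key -> (source_mtime, value)

    def get(self, source_path, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        cached_mtime, value = entry
        # Invalidate if the source file was modified after caching
        if os.path.getmtime(source_path) != cached_mtime:
            del self._store[key]
            return None
        return value

    def put(self, source_path, key, value):
        self._store[key] = (os.path.getmtime(source_path), value)
```

A content hash would survive `touch`-style false positives, at the cost of reading the file on every lookup; mtime keeps lookups cheap.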
- **Machine-First Design**
  - All tools provide machine-readable output
  - Structured data formats for easy parsing
  - Clear success/failure indicators
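In practice, machine-first output might look like the JSON below. The schema here is hypothetical (field names like `status` and `results` are illustrative, not the tool's documented format), but it shows why a top-level success/failure indicator matters: an agent can branch on it without parsing prose.

```python
import json

# Hypothetical machine-readable result from a tool run
raw = '''
{
  "tool": "method_validator",
  "status": "success",
  "results": [
    {"method": "requests.get", "exists": true, "signature": "(url, **kwargs)"}
  ]
}
'''

report = json.loads(raw)
# Branch on the explicit status flag rather than scraping log text
if report["status"] == "success":
    for item in report["results"]:
        print(f'{item["method"]}: validated')
```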
- **Progressive Enhancement**
  - Tools work with basic functionality out of the box
  - Advanced features available for more complex use cases
Contributions are welcome! If you have ideas for new tools or improvements to existing ones, please:
- Fork the repository
- Create a feature branch
- Commit your changes
- Open a pull request
Please ensure your contributions maintain or enhance the tools' autonomous operation capabilities.
We plan to add more tools to this repository, including:
- Code Pattern Analyzer
- Dependency Graph Generator
- Test Case Validator
- Documentation Analyzer
- Type Inference Helper
Stay tuned for updates!
We are actively developing additional agent tools for various technologies and platforms:
- ArangoDB Agent Tools: Smart graph database operations, query optimization, and schema validation
- Database Migration Assistant: Intelligent schema evolution and data transformation
- Database Schema Creator: Structured schema generation formatted for LLM consumption
- Docker Agent Tools: Container optimization, security scanning, and deployment validation
- Infrastructure as Code Validator: Template verification and best practices enforcement
- GitHub Integration Tools: PR analysis, code review automation, and workflow optimization
- Local LLM Tools: Integration with local language models for privacy-sensitive operations
- CI/CD Pipeline Validator
- Security Compliance Checker
- Performance Optimization Analyzer
- Cross-Platform Compatibility Validator
- API Integration Assistant
- Cloud Resource Optimizer
Each tool will follow our core design principles of prevention over correction, intelligent caching, and machine-first design while providing specific capabilities for its target technology.
While these tools are primarily designed for AI agents, they can also be valuable for human developers:
- Use Method Validator to explore unfamiliar packages
- Leverage automated API discovery
- Benefit from intelligent caching and analysis
However, the primary focus remains on enhancing AI agent capabilities within Cursor.
This project uses pytest for testing. The test suite includes both unit tests and integration tests.
To run the test suite:
```bash
pytest tests/
```
For test coverage report:
```bash
pytest --cov=agent_tools tests/
```
- `tests/unit/`: Unit tests for individual components
- `tests/integration/`: Integration tests for end-to-end functionality
When contributing to this project:
- All code changes must include corresponding tests
- Follow the AAA pattern (Arrange-Act-Assert)
- Use pytest fixtures for common setup
- Mock external dependencies appropriately
- Ensure both new and existing tests pass
- Unit tests for all new functions/methods
- Integration tests for feature changes
- Edge case coverage
- Minimum 80% coverage for new code
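A minimal test following these guidelines might look like this. The function under test and its name are hypothetical (not the actual agent_tools API); the point is the shape: a pytest fixture for common setup, a mocked dependency instead of a real package import, and the Arrange-Act-Assert structure.

```python
import pytest
from unittest.mock import MagicMock


# Hypothetical unit under test: checks whether a method exists on a package
def validate_method(package, name):
    return hasattr(package, name)


@pytest.fixture
def fake_package():
    # Arrange: mock the external dependency; spec limits available attributes
    return MagicMock(spec=["get", "post"])


def test_validate_method_found(fake_package):
    # Act
    result = validate_method(fake_package, "get")
    # Assert
    assert result is True


def test_validate_method_missing(fake_package):
    assert validate_method(fake_package, "nope") is False
```

Using `spec` on the mock makes `hasattr` behave realistically, so the negative case actually exercises the missing-method path.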
Tests are automatically run on:
- Every pull request
- Every merge to main branch
- Every release tag
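Those three triggers could be wired up with a CI configuration along these lines. This is a hypothetical GitHub Actions sketch; the repository's actual workflow file may differ in names and versions.

```yaml
name: tests
on:
  pull_request:
  push:
    branches: [main]
    tags: ['v*']
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -e . pytest pytest-cov
      - run: pytest --cov=agent_tools tests/
```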
A centralized repository of Cursor MDC rules and design patterns for consistent code generation across projects.
```
cursor-patterns/
├── rules/                    # All MDC rules
│   ├── core/                 # Core rules (code advice, design patterns index)
│   ├── design_patterns/      # Specific design pattern implementations
│   └── project_specific/     # Language-specific patterns
├── scripts/                  # Installation and update scripts
└── README.md
```
- Clone this repository:

  ```bash
  git clone https://github.com/yourusername/cursor-patterns
  ```

- Run the installation script:

  ```bash
  cd cursor-patterns
  chmod +x scripts/install.sh
  ./scripts/install.sh
  ```
To update your patterns to the latest version:
```bash
chmod +x scripts/update.sh
./scripts/update.sh
```
This will create a backup of your current rules before updating.
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
- Create a new `.mdc` file in the appropriate directory
- Follow the MDC format:
- Include clear frontmatter (name, version, author, etc.)
- Add specific glob patterns
- Define clear triggers
- Document the pattern thoroughly
- Update the index file if necessary
- Test the pattern in a real project
- Submit a pull request
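A new rule file might look roughly like the sketch below. The rule content and glob are invented for illustration, and the exact frontmatter fields your rules use may vary; follow the existing files in `rules/` as the authoritative template.

```markdown
---
name: explicit-error-handling
version: 1.0.0
author: yourusername
description: Prefer specific exception handling over bare except blocks
globs: "**/*.py"
---

When generating Python error handling, catch specific exception types,
log relevant context, and re-raise rather than silently swallowing errors.
```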
MIT