
Pragmatic BIM Requirements Manager: Streamlining BIM Data Management


Introduction

The Challenge

Building Information Modeling (BIM) is crucial in modern design and construction, yet managing BIM requirements and extracting value from BIM data remains complex:

  1. Client BIM requirements are dispersed across multiple, disconnected documents.
  2. Requirement inconsistencies impede the development of automated BIM data workflows.
  3. Companies resort to custom solutions (often Excel-based), but lack resources for optimal implementation.

Current solutions like BuildingSMART's data dictionary and Information Delivery Specification (IDS) primarily address metadata requirements. Plannerly, while comprehensive, focuses more on project execution than data handover and requirement formulation.

Our Solution

Introducing our modern, lightweight web application built with FastAPI and HTMX for efficient and pragmatic BIM requirements management:

Key Features

  1. Modern Web Architecture: Built with FastAPI backend and HTMX frontend for fast, responsive user experience without complex JavaScript frameworks.
  2. Freedom and Flexibility through Open Source Licensing: Adapt, modify, and extend the tool to meet your specific workflow needs.
  3. Customization and Extension: Tailor the tool to your project's or organization's unique needs.
  4. Multi-view Communication: Deliver requirements in stakeholder-specific formats:
    • Interactive web interface with real-time updates for Modelers
    • Excel contract documents for Project Managers
    • IDS or BIMCollab ZOOM Smart Views for BIM Managers
  5. Backend Flexibility: Work with familiar databases (Excel, Airtable, MS Access, etc.) for requirement production.
  6. Progressive Enhancement: HTMX provides dynamic interactions while maintaining server-side rendering for better SEO and accessibility.
  7. Flexible Licensing:
    • Self-hosting for complete control
    • Full-service option for managed solutions

Simplify your BIM data management and maximize your project's potential with our versatile, user-friendly solution!

Use Cases: How This Tool Empowers Client Organizations

This tool is primarily designed to support government and large client organizations in defining and communicating their BIM data requirements effectively. With over five years of experience working with diverse database solutions, we have crafted a tool that addresses the specific needs of our clients. Here’s how it is being utilized:

  1. Custom Requirement Definition: Clients can tailor BIM requirements to align with their unique data needs, ensuring that the information collected is relevant and actionable for their specific projects.

  2. Streamlined Data Documentation: The tool facilitates the documentation of data post-processing steps, ensuring a smooth and efficient handover process within their systems, reducing the risk of data loss or misinterpretation.

  3. Effective Communication of Data Needs: By using this tool, clients can clearly communicate their data requirements to planners and contractors, ensuring that everyone involved in the project is on the same page and working towards the same data goals.

This structured approach not only improves efficiency but also enhances the alignment of BIM data with the strategic objectives of the organization, leading to better project outcomes.

Project Overview

This project consists of two main components:

1. A Viewer for Modeling Guidelines

A user-friendly viewer that allows stakeholders to easily communicate, access, navigate, and understand the BIM modeling guidelines. This viewer presents the guidelines in a clear and structured manner, ensuring that they are easy to follow and apply consistently across organizations and projects.

To create your own instance, clone the GitHub project and adapt it to your specific needs.

2. A Structured Approach for Defining Requirements

BIM requirements often become disconnected from business value, reducing them to mere wish lists. To address this, we’ve designed the project around the principle that every requirement must have a clear purpose, an accountable owner, and actionable workflows.

Our approach begins with:

  • Defining the Purpose and Workflow: Start by identifying the purpose and workflow that the requirement supports.
  • Assigning Responsibility: Specify who should deliver the information and in which model or file.
  • Detailing Requirements: Outline what needs to be modeled and at what quality to achieve the desired outcome.
  • Specifying Attributes: Identify the necessary attributes to achieve the business value.

This structured approach is reflected in the database design with core tables for:

   +-----------------+
   |    Workflows    |<---
   +-----------------+   |
                         |
   +-----------------+   |
   |      Models     |<--|
   +-----------------+   |
                         |
   +-----------------+   |
   |     Elements    |<--|
   +-----------------+   |
                         |  
   +-----------------+   |   +-----------------+
   |   Attributes    |---|-->|     Mapping*    |
   +-----------------+       +-----------------+

To get started, create your own database in the tool of your choice—whether it's Excel, Airtable, or another solution. (*Mapping is not necessary for the definition process.)
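
For example, a single requirement chain might look like this (hypothetical values, read bottom-up from the attribute; the ID patterns follow the column descriptions further below):

   Workflow:  WF-01 "Area management"
   Model:     ARC-Model
   Element:   Space_ARC-Model
   Attribute: LongName_space -> ElementID: Space_ARC-Model,
              ModelID: ARC-Model, WorkflowID: WF-01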

Technology Stack

This application is built with modern web technologies for optimal performance and maintainability:

Backend

  • FastAPI: Modern, fast web framework for building APIs with automatic OpenAPI documentation
  • Python 3.8+: Core programming language with extensive data processing libraries
  • Pydantic: Data validation and settings management using Python type annotations
  • Jinja2: Server-side templating engine for dynamic HTML generation

Frontend

  • HTMX: Modern library for dynamic web interactions without complex JavaScript
  • Alpine.js: Minimal JavaScript framework for reactive UI components
  • CSS3: Modern styling with CSS custom properties and responsive design
  • HTML5: Semantic markup with progressive enhancement

Data Processing

  • Pandas: Powerful data manipulation and analysis library
  • NumPy: Numerical computing foundation
  • OpenPyXL/XlsxWriter: Excel file generation and manipulation
  • BeautifulSoup4: HTML parsing and processing

BIM Integration

  • ifctester: IFC file validation and testing
  • python-slugify: URL-friendly string generation
  • WeasyPrint: PDF generation from HTML/CSS

Cloud Integration

  • Azure Services: Cloud storage and authentication integration
  • Azure Blob Storage: File storage and version management

Development & Deployment

  • Uvicorn: ASGI server for FastAPI applications
  • Docker: Containerization support
  • Pytest: Testing framework with async support

This technology stack provides a robust foundation for handling complex BIM data while maintaining excellent performance and user experience.

Usage Instructions

Set up the database in the tool of your choice.

Options include:

  • Airtable (recommended)
  • Google Sheets (second choice, as it allows multi-select picklists)
  • Excel
  • Other database management systems

Template Available: You can use our Google Sheets template as a starting point. This template includes all the required columns and structure for the database tables (Workflows, Models, Elements, Attributes, and Mapping).

Populate the database with a Master Version

A Master Version is the template from which project-specific versions (with more or less data) are created. Recommended naming schema:

  • Master Template Version e.g. V0.9
  • Project Versions e.g. V0.9-P-{YOUR PROJECT NUMBER} e.g. V0.9-P-14414
  1. Define the workflow: Outline the purpose and intended use of your data, and define which files are necessary for these workflows (see Step 2)

  2. Create the container: the file, i.e., the IFC model in which you expect your data

  3. Add necessary logical elements (e.g., walls, floors, rooms). Keep in mind that the IFC Entity is not equal to an element. Think along the lines of what you have to define in a modeling guideline.

  4. Define necessary attributes: List required attributes for each logical element (at least one per element; otherwise the code currently won't process them)

    IMPORTANT: When defining attributes, it is not allowed to assign one attribute to several Elements and several Models at the same time. This leads to unpredictable behavior. (A minimal validation sketch follows this list.)

    These patterns are fine:

    1. One Attribute - One Element - One Model
    2. One Attribute - One Element - Many Models
    3. One Attribute - Many Elements - One Model

    This pattern is NOT allowed:

    • One Attribute - Many Elements - Many Models (leads to all Elements being assigned to all Models)
  5. Connect tables: Link all tables bottom-up, starting from the attribute level

  6. Specify data usage (if needed): Detail how to use the data in the mapping table
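
A minimal validation sketch for the rule in Step 4, assuming the M_Attributes.csv layout described under "Database Configuration" below (ElementID and ModelID cells may hold comma-separated lists); the file path is hypothetical:

import pandas as pd

# Hypothetical path to an exported attributes table
attributes = pd.read_csv("data/V0.9/M_Attributes.csv")

def links_many(cell) -> bool:
    """True if a cell links to more than one ID via a comma-separated list."""
    return isinstance(cell, str) and "," in cell

# Forbidden pattern: one Attribute linked to many Elements AND many Models
bad = attributes[
    attributes["ElementID"].apply(links_many)
    & attributes["ModelID"].apply(links_many)
]
if not bad.empty:
    print("Attributes assigned to many Elements and many Models:")
    print(bad["AttributeID"].tolist())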

Create a New Master Version

  • Option 1: quick and dirty in the code:

    • Export the following CSV files from your database, ensuring that all necessary columns are included (refer to the "Database Configuration / Required Columns Descriptions" section below for details on the required columns):

      • M_Attributes.csv
      • M_Elements.csv
      • M_Models.csv
      • M_Workflows.csv
    • In the data/ directory, create a new folder named after the version you're working on (e.g., data/V2.05).

    • Move the exported CSV files into the newly created version folder.

    • Execute the script located at src/batch_processing_import.py. This will generate a merged Excel file containing all the data, as well as different output formats (a rough sketch of this merge step follows below).

  • Option 2: Upload through the frontend

    • A more scalable solution is to use the admin page to upload new versions to a blob storage
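
For orientation, a rough sketch of the kind of merge the import step performs: read the four CSV files of a version folder and write them into one Excel workbook. This is only an illustration using pandas, not the actual implementation of src/batch_processing_import.py:

import pandas as pd
from pathlib import Path

# Hypothetical version folder, as created in Option 1 above
version_dir = Path("data/V2.05")
tables = ["M_Workflows", "M_Models", "M_Elements", "M_Attributes"]

# Merge the four CSV exports into one Excel file, one sheet per table
with pd.ExcelWriter(version_dir / "merged_requirements.xlsx") as writer:
    for name in tables:
        df = pd.read_csv(version_dir / f"{name}.csv")
        df.to_excel(writer, sheet_name=name.replace("M_", "", 1), index=False)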

Manage different versions

We recommend the following workflow to manage different versions:

  1. Create the first version.
  2. Deploy the data to a staging area first to see how it will look.
  3. Once satisfied, deploy on the productive system.
  4. Freeze the version and copy it.
  5. Continue working on the new version.

Create Project Versions from the Master Version Template

TODO

Installation and Setup

To set up your own instance of the project, follow these steps:

1. Fork the Repository

Create a fork of the project on GitHub:

Public: Use GitHub's Fork button on the repository page.

Private: Detailed instructions for creating a private fork:

  • Create a new private repository (e.g., Private_Pragmatic_BIM_Requirements)
  • Go to your terminal and execute the following commands (replace the capitalized placeholders with your own data):
# Clone the original repository as a bare repository
git clone --bare https://github.com/simondilhas/Pragmatic_BIM_Requirements_Manager.git

# Navigate into the cloned repository
cd Pragmatic_BIM_Requirements_Manager.git

# Mirror-push to your new private repository
git push --mirror https://github.com/YOUR_USERNAME/YOUR_REPOSITORY_NAME.git

# Remove the temporary local repository
cd ..
# macOS/Linux:
rm -rf Pragmatic_BIM_Requirements_Manager.git
# Windows (PowerShell):
Remove-Item -Recurse -Force Pragmatic_BIM_Requirements_Manager.git

# Clone your new private repository
git clone https://github.com/YOUR_USERNAME/YOUR_REPOSITORY_NAME.git

# Navigate into your new repository
cd YOUR_REPOSITORY_NAME

# Add the original repository as a remote to fetch updates
git remote add upstream https://github.com/simondilhas/Pragmatic_BIM_Requirements_Manager.git

2. Clone and Run Locally

# Install dependencies
pip install -r requirements.txt

# Run the FastAPI application
python main.py

The application will be available at http://localhost:8000

For development with auto-reload:

uvicorn main:app --reload --host 0.0.0.0 --port 8000

3. Configure the Application

Environment Configuration

  1. Copy the environment template:

    cp .env.example .env
  2. Edit .env file with your configuration:

    # Required: API Authentication
    API_KEY=your_secure_api_key_here
    GITHUB_TOKEN=your_github_personal_access_token
    
    # Required: Azure Storage Configuration
    AZURE_STORAGE_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=your_storage_account;AccountKey=your_storage_key;EndpointSuffix=core.windows.net
    AZURE_STORAGE_CONTAINER_NAME_PROJECTS=your_projects_container
    AZURE_STORAGE_CONTAINER_NAME_GENERAL=your_general_container
    
    # Required: Application Settings
    PYTHONPATH=/app
    
    # Optional: Advanced Configuration
    SECRET_KEY=your_secret_key_here
    ADMIN_PASSWORD=your_secure_admin_password_here
    DATABASE_URL=your_database_url_here
  3. Generate required secrets:

    # Generate API key
    import secrets
    print("API_KEY=" + secrets.token_urlsafe(32))
    
    # Generate secret key
    print("SECRET_KEY=" + secrets.token_urlsafe(32))
    
    # Generate admin password (use a strong password)
    print("ADMIN_PASSWORD=your_strong_admin_password_here")
  4. Get Azure Storage connection string:

    # Option 1: From Azure Portal
    # Go to Storage Account > Access Keys > Connection string
    
    # Option 2: From Azure CLI
    az storage account show-connection-string --name your_storage_account --resource-group your_resource_group

Security Configuration

The application includes built-in security features:

  • API Key Authentication: Secure API endpoints with key-based authentication
  • Admin Password Protection: Hashed password storage for admin access
  • CORS Configuration: Configurable cross-origin resource sharing
  • Environment-based Secrets: Secure configuration management

CORS Configuration (Important for Production):

The application allows requests from any origin by default (CORS_ORIGINS=["*"]). For production environments, you should restrict this to your specific domains by setting the CORS_ORIGINS environment variable:

# In your .env file, set specific domains:
CORS_ORIGINS=["https://yourdomain.com", "https://www.yourdomain.com", "https://staging.yourdomain.com"]

Advanced CORS Configuration: You can also configure other CORS settings:

CORS_CREDENTIALS=true
CORS_METHODS=["GET", "POST", "PUT", "DELETE", "OPTIONS"]
CORS_HEADERS=["Authorization", "Content-Type", "File-Name"]

Security Notice:

For production environments, consider implementing:

  • Azure Active Directory (Azure AD) integration
  • Two-factor authentication (2FA)
  • HTTPS enforcement
  • Regular security updates
  • Restricted CORS origins (your specific domains only)

4. Deployment Options

Production Deployment

Azure Container Apps (Recommended)

Modern, serverless container hosting with automatic scaling and HTTPS:

Prerequisites:

  • Azure CLI installed and logged in
  • Docker installed locally
  • Azure Container Registry (ACR) created

Quick Deploy:

# 1. Create Azure resources
./deploy-azure-container-apps.sh

# 2. Your app will be available at:
# https://bim-requirements-manager.{environment}.azurecontainerapps.io

Manual Deploy:

# 1. Create resource group
az group create --name your-resource-group --location "your-location"

# 2. Create Azure Container Registry
az acr create --resource-group your-resource-group --name your-registry --sku Basic

# 3. Create Container Apps environment
az containerapp env create --name your-environment --resource-group your-resource-group --location "your-location"

# 4. Deploy the application
az containerapp create \
  --name your-app-name \
  --resource-group your-resource-group \
  --environment your-environment \
  --image your-registry.azurecr.io/bim-requirements-manager:latest \
  --target-port 8000 \
  --ingress external \
  --registry-server your-registry.azurecr.io \
  --registry-username your-registry \
  --registry-password $(az acr credential show --name your-registry --query passwords[0].value -o tsv) \
  --env-vars API_KEY=your_api_key GITHUB_TOKEN=your_github_token

Features:

  • ✅ Automatic HTTPS with custom domains
  • ✅ Auto-scaling (0 to N replicas)
  • ✅ Built-in monitoring and logging
  • ✅ Cost-effective (pay per use)
  • ✅ Easy updates and rollbacks

Azure App Service (Legacy)

Traditional web app hosting (still supported):

# Use the provided deployment script
./deploy-container.sh

Features:

  • ✅ Always-on hosting
  • ✅ Azure integration
  • ❌ More expensive than Container Apps
  • ❌ Less flexible scaling

Docker Deployment (Local/Development)

# Build the Docker image
docker build -t bim-requirements-manager .

# Run the container
docker run -p 8000:8000 --env-file .env bim-requirements-manager

Managed Solutions

  • Contact Abstract Ltd. for fully managed hosting solutions
  • Custom deployment and maintenance services available

5. Stay Updated

Regularly sync with the original repository:

git checkout main
git fetch upstream
git merge upstream/main
git push origin main

Or, to do it more simply:

  • Go to your new project
  • Execute once:
git config --global alias.sync '!git checkout main && git fetch upstream && git merge upstream/main && git push origin main'
  • Afterwards, just run the following to fetch updates and merge them into your instance:
git sync

Database Configuration / Required Columns Descriptions

You can use any database tool of your choice (e.g., Excel, Airtable, etc.), but ensure it follows this structure. Note: To add more languages, simply append a new column with the appropriate language code, such as AttributeDescriptionFR for French.

Workflows Columns

  • WorkflowID (str, int): A unique identifier for the workflow.
  • WorkflowCode (str, int): A code to identify the workflow/use case.
  • WorkflowName* (str): The name of the workflow in the specified language, e.g., WorkflowNameEN for English.
  • WorkflowDescription* (text): A detailed description of the workflow in the specified language, e.g., WorkflowDescriptionEN for English.
  • ModelForWorkflow* (str): Defines on a high level which models (files) are necessary for a workflow.
  • WorkflowGroup (str): Optional column to group the workflows for easier selection in the Admin area.

Models Columns

  • ModelID (str, int): A unique identifier for the model, e.g., ARC-Model.
  • ModelName* (str): The name of the model in the specified language, e.g., ModelNameEN for English.
  • ModelDescription* (text): A detailed description of the model in the specified language, e.g., ModelDescriptionEN for English.
  • FileName* (str): The name of the file associated with the model in the specified language, e.g., FileNameEN for English.
  • SortModels (int, float): A numerical value used to sort or order the models.

Elements Columns

  • ElementID (str): A unique identifier for the element e.g., 123 or a pattern like {ElementName}_{ModelName}, e.g., Space_ARC-Model.
  • ElementName* (str): The name of the element in the specified language, e.g., ElementNameEN for English.
  • SortElement (int, float): A numerical value used to sort or order the elements.
  • IfcEntityIfc2x3Name (str): The name of the IFC (Industry Foundation Classes) entity associated with the element, compliant with IFC2x3 standards.
  • IfcEntityIfc4.0Name (str): The name of the IFC (Industry Foundation Classes) entity associated with the element, compliant with IFC 4.0 standards.
  • IfcEntityIfc4x3Name (str): The name of the IFC (Industry Foundation Classes) entity associated with the element, compliant with IFC4x3 standards.
  • ElementDescription* (text): A detailed description of the element in the specified language, e.g., ElementDescriptionEN for English.

Attributes Columns

  • AttributeID (str, int): A unique identifier for the attribute, e.g., 123 or a pattern like {AttributeName}_{ElementName}, e.g., LongName_space.
  • AttributeName (str): The name or type of the attribute, e.g., Name, LongName, IsExternal.
  • SortAttribute (int, float): A numerical value used to sort or order the attributes.
  • AttributeDescription* (text): A description of the attribute in the specified language, e.g., AttributeDescriptionEN for English.
  • Pset (str): The property set to which the attribute belongs.
  • AllowedValues* (str): Comma-separated list of allowed values in the specified language, e.g., AllowedValuesEN for English.
  • RegexCheck* (str): Regular expression used to validate the attribute in the specified language, e.g., RegexCheckEN for English.
  • DataTyp (IfcDatatyp): The data type of the attribute’s value, e.g., IfcLabel.
  • Unit (str): Unit of measurement for the attribute, if applicable, e.g., sqm.
  • IFC2x3 (bool): Indicates if the attribute is compliant with IFC 2x3 standards.
  • IFC4 (bool): Indicates if the attribute is compliant with IFC 4 standards.
  • IFC4.3 (bool): Indicates if the attribute is compliant with IFC 4.3 standards.
  • Applicability (bool): Indicates if the attribute is used as an IDS Applicability.
  • ElementID (str): Identifier for the related element. Use a comma-separated list to link to multiple elements.
  • ModelID (str): Identifier for the model to which this attribute applies. Use a comma-separated list to link to multiple models.
  • WorkflowID (str): Identifier for the workflow or process associated with this attribute. Use a comma-separated list to link to multiple workflows.
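
A hypothetical example row (showing a subset of the columns above) to illustrate how the link columns are filled; the Pset_WallCommon/IsExternal pairing follows the IFC standard property sets:

    AttributeID:   IsExternal_Wall
    AttributeName: IsExternal
    Pset:          Pset_WallCommon
    DataTyp:       IfcBoolean
    ElementID:     Wall_ARC-Model
    ModelID:       ARC-Model
    WorkflowID:    WF-01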

Mapping Columns

  • MappingID (str): A unique identifier for the mapping rule or process associated with this mapping.
  • Description (text): A detailed description of the mapping and its purpose. This text explains the purpose, scope, or other relevant information about the workflow. For example, Mapping of SIA416 classified areas to SAP
  • TargetSystem (str): Names the target system, e.g., SAP S/4HANA.
  • IfcAttributIDs (str): A comma-separated list of unique identifiers for the attributes involved in this workflow. Each ID corresponds to an attribute that is part of the mapping logic. For example, Area_Space, SIA_416_Classification.
  • CalculationLogic (text): A textual description or formula that outlines the logic used to calculate or process the attributes within the workflow. This helps the programmer to set up the actual workflow. e.g. Sum of all Area_Space if they are HNF

Contributing

We welcome contributions from everyone—whether you're a seasoned developer, new to open source, or a domain expert. Your input is invaluable in making this project better for everyone.

  1. Code Review and Testing
  • Code Review: Help us improve the quality of our codebase by reviewing pull requests. Look for bugs, suggest optimizations, and ensure consistency with our coding standards.
  • Write Tests: Increase our test coverage by writing unit tests, integration tests, or end-to-end tests. This helps ensure that the code is robust and maintains functionality as the project evolves.
  2. Implement New Features
  • Feature Suggestions: If you have ideas for new features, feel free to suggest them. You can do this by opening an issue on GitHub with a detailed description of the feature and its potential impact.
  • Work on Open Issues: Check out the GitHub Issues to find features or bugs that need attention. Feel free to assign yourself an issue and submit a pull request once you're done.
  3. General Suggestions and Feedback
  • Use the project and provide feedback. Your experiences and insights can help us make the project better for everyone.

If you are not a Programmer:

  1. Improve Documentation and Sample Workflows
  • Documentation: Our documentation is crucial for helping new users and contributors get started. Help us expand or refine it by adding new content, fixing errors, or clarifying existing sections.
  • Create and share sample workflows that can be included in the documentation or as part of the project examples. This aids users in understanding the practical applications of the project.
  2. General Suggestions and Feedback
  • Use the project and provide feedback. Your experiences and insights can help us make the project better for everyone.

FAQ

Common Issues and Solutions

Deployment Issues

Q: Container App fails to start with "exec: startup.sh: executable file not found"
A: This is resolved in the new Container Apps deployment. The Dockerfile now runs uvicorn directly instead of using a startup script.

Q: Application shows "Error 403 - This web app is stopped"
A: This was an issue with Azure App Service. The new Container Apps deployment resolves this with proper container orchestration.

Q: Environment variables show as "null" in Azure
A: This is a display issue in Azure CLI. The variables are actually set correctly. Verify with:

az containerapp show --name your-app-name --resource-group your-resource-group --query "properties.template.containers[0].env"

Q: Container App scales to 0 and takes time to start
A: This is normal behavior for cost optimization. To keep it always running:

az containerapp update --name your-app-name --resource-group your-resource-group --min-replicas 1

Local Development Issues

Q: "This site can't be reached" when testing locally A: Use localhost:8000 or 127.0.0.1:8000 instead of 0.0.0.0:8000. Browsers cannot connect to 0.0.0.0 directly.

Q: Port 8000 already in use
A: Stop existing containers or use a different port:

docker stop $(docker ps -q --filter "publish=8000")
# or
docker run -p 8001:8000 --env-file .env bim-requirements-manager

Q: FastAPI JSON response instead of HTMX frontend
A: Ensure there's no conflicting root endpoint in main.py. The web pages router should handle the root path.
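
A minimal sketch of the intended layout (hypothetical names; the project's actual router setup may differ): let the pages router own the root path, and avoid a second JSON endpoint on "/" in main.py.

from fastapi import APIRouter, FastAPI
from fastapi.responses import HTMLResponse

pages = APIRouter()

@pages.get("/", response_class=HTMLResponse)
def home() -> str:
    # Server-rendered HTML entry point for the HTMX frontend
    return "<h1>Requirements viewer</h1>"

app = FastAPI()
app.include_router(pages)  # no competing @app.get("/") returning JSON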

Q: CORS errors when accessing from different domains
A: The application allows all origins by default (CORS_ORIGINS=["*"]). For production, restrict CORS to your specific domains by setting the CORS_ORIGINS environment variable in your .env file. See the Security Configuration section for details.

Azure Storage Issues

Q: Storage account not found
A: Ensure the storage account exists and the connection string is correct:

az storage account list --query "[].{Name:name, ResourceGroup:resourceGroup}"

Q: Container not found in storage
A: Create the required containers:

az storage container create --name your_projects_container --account-name your_storage_account
az storage container create --name your_general_container --account-name your_storage_account

Performance Issues

Q: Slow response times
A: Check if the app is scaling properly:

az containerapp show --name your-app-name --resource-group your-resource-group --query "properties.template.scale"

Q: High costs
A: Optimize scaling settings:

# Scale to zero when idle (cheapest)
az containerapp update --name your-app-name --resource-group your-resource-group --min-replicas 0 --max-replicas 3

Getting Help

If you encounter issues not covered here:

  1. Check the application logs:

    az containerapp logs show --name your-app-name --resource-group your-resource-group
  2. Verify your environment variables:

    az containerapp show --name your-app-name --resource-group your-resource-group --query "properties.template.containers[0].env"
  3. Test the health endpoint:

    curl https://your-app-url.azurecontainerapps.io/health
  4. Create an issue on GitHub with:

    • Error messages
    • Steps to reproduce
    • Your configuration (without secrets)
    • Log output

Roadmap

  1. ✅ MVP Version: Core functionality with FastAPI + HTMX architecture
  2. ✅ Admin Interface: Complete admin panel for non-technical users
  3. ✅ IDS Creation: Automated IDS file generation
  4. 🔄 Enhanced UI/UX: Improved user interface and user experience
  5. 🔄 Advanced Analytics: Project analytics and reporting features
  6. 📋 API Documentation: Comprehensive API documentation with OpenAPI
  7. 📋 Multi-language Support: Extended internationalization support
  8. 📋 Mobile Optimization: Responsive design improvements for mobile devices

License

This project is licensed under the GNU Lesser General Public License (LGPL).

What This Means for Users:

  • Freedom to Use: You are free to use this software for personal, academic, or commercial purposes without any restrictions.
  • Modification and Distribution: You can modify the source code and distribute your modified versions, provided that you also distribute the modifications under the same LGPL license.
  • Integration with Proprietary Software: Unlike the full GNU General Public License (GPL), the LGPL allows you to link this library with proprietary software without requiring that the proprietary software itself be open-sourced.
  • Contribution Back: If you improve or modify the library, we encourage (but do not require) you to contribute your changes back to the community, so everyone can benefit from your enhancements.

For full details, please see the LICENSE.txt file included in the repository (https://github.com/simondilhas/Pragmatic_BIM_Requirements_Manager).

Acknowledgements

I would like to extend my heartfelt appreciation to everyone who contributed to the success of this project:

  • Open Source Libraries: My sincere thanks go to the developers and contributors of FastAPI, HTMX, Pandas, and the broader Python ecosystem. Your open-source tools have been essential in building this modern web application.

  • Pierre Monico: I am deeply grateful for your continued support and coding advice. Your insights and guidance have been invaluable throughout this process.

  • Requirement Definition Projects: Over the past years, I have had the privilege of simplifying the requirement definition process for various projects. The experience gained from these efforts has been instrumental in setting up this project.

  • Community Contributors: Thank you to all contributors who have helped improve the codebase, documentation, and user experience.
