AI and LLM Tool Ecosystem

A curated list and explanation of key elements in the AI and LLM ecosystem, including autonomous agents, coding assistants, frameworks, open-weight models, and self-hosted AI.

1. Autonomous / Agentic AI Platforms

Platforms that allow creation, orchestration, and deployment of AI agents that act autonomously or follow workflows:

  • CrewAI – Framework for orchestrating role-playing and autonomous AI agents
  • AutoGen – Python framework for agentic AI
  • Dify – Open-source, production-ready platform for GUI-based agentic workflow and agent development (self-hostable)
  • AutoGPT – Create, deploy, and manage continuous AI agents for complex workflows
  • LangChain – Framework for building LLM-powered applications; orchestrates reasoning, tool use, and action proposals
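The platforms above differ in scope, but they share one core mechanic: a loop in which the model either calls a tool or answers directly, and each tool result is fed back as an observation. A minimal sketch of that loop in plain Python, with a stubbed model standing in for a real LLM call (every name here is illustrative, not any framework's API):

```python
# Minimal agent loop: the "model" picks a tool, the runtime executes it,
# and the observation is appended to the history until the model answers.
# fake_llm, TOOLS, and run_agent are illustrative stubs, not a real API.

def calculator(expression: str) -> str:
    """Toy tool: evaluate an arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(history):
    """Stand-in for a real model call: decide to call a tool or to answer."""
    last = history[-1]
    if last.startswith("user:"):
        return {"tool": "calculator", "input": "6 * 7"}
    # After seeing a tool observation, answer directly.
    return {"answer": f"The result is {last.split(':', 1)[1].strip()}"}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"user: {task}"]
    for _ in range(max_steps):
        decision = fake_llm(history)
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](decision["input"])
        history.append(f"observation: {result}")
    return "gave up"
```

Real frameworks add the hard parts around this skeleton: prompt formats for tool selection, retries, memory, and multi-agent coordination.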

2. IDE / Coding Assistants

2.1 VSCode Plugins

  • Continue – Build and run custom agents across IDE, terminal, and CI

2.2 CLI Agents

Open-Source

  • Opencode – Terminal-based AI coding agent that can use any AI model
  • Aider – Terminal coding assistant
  • Agent CLI – Local AI-powered command-line suite
  • CLI Engineer – Self-hosted AI CLI tools

Proprietary / Requires Login

Note: Tools in this category require an account login or a paid subscription.

3. Web Interfaces / UIs

  • Open WebUI – User-friendly web interface for interacting with AI models

4. Copilot & Coding Tips

  • Awesome Copilot – Curated tips, tricks, and resources for GitHub Copilot

5. Frameworks and Tools

  • GPTCache – Caching layer for LLMs; speeds up requests using semantic caching
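Semantic caching stores (prompt embedding, answer) pairs and returns a stored answer whenever a new prompt's embedding is close enough to a cached one, skipping the LLM call entirely. A toy sketch of the idea, using bag-of-words cosine similarity as a stand-in for real embeddings (this illustrates the concept only and is not GPTCache's API):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': bag-of-words counts. Real caches use model embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class SemanticCache:
    """Return a stored answer when a new prompt is similar enough to an old one."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, prompt: str):
        query = embed(prompt)
        best = max(self.entries, key=lambda e: cosine(query, e[0]), default=None)
        if best and cosine(query, best[0]) >= self.threshold:
            return best[1]
        return None  # cache miss: caller falls through to the real LLM

    def put(self, prompt: str, answer: str):
        self.entries.append((embed(prompt), answer))
```

The threshold is the key tuning knob: too low and unrelated prompts get stale answers, too high and paraphrases miss the cache.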

6. Open-Weight Models

Open-weight models publish their weights, which allows local inference; the full training code and data are not necessarily open. These models can be run as local LLMs, sometimes with cloud offload options.

Examples of Open-Weight Models:

  • Qwen3
  • gpt-oss-20b
  • DeepSeek-R1
  • GLM-4.5
  • Gemma (Google DeepMind)

Note: Full-sized open-weight models trail the top closed models by roughly six months in capability, and tend to be slower at a comparable quality level.
Source: blog.brokk.ai

6.1 Local LLM Runners / Inference Runtimes

Tools for running open-weight models locally or in hybrid mode. Many of them can offload inference to a cloud model when the full weights exceed what local hardware can handle.
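The hybrid pattern these runners implement is straightforward: attempt local inference first, and fall back to a hosted model when the weights exceed local hardware. A sketch with hypothetical stub backends (none of the names below belong to any real runner):

```python
# Hybrid inference: prefer the local backend, fall back to a cloud API.
# Both backends are hypothetical stubs, not a real runner's interface.

LOCAL_MAX_PARAMS_B = 13  # largest model size (billions of params) local hardware handles

def local_generate(prompt: str, model_size_b: int) -> str:
    if model_size_b > LOCAL_MAX_PARAMS_B:
        raise RuntimeError("model too large for local hardware")
    return f"[local] reply to: {prompt}"

def cloud_generate(prompt: str) -> str:
    return f"[cloud] reply to: {prompt}"

def generate(prompt: str, model_size_b: int) -> str:
    try:
        return local_generate(prompt, model_size_b)
    except RuntimeError:
        return cloud_generate(prompt)
```

Real runners make the same decision on richer signals (available VRAM, quantization level, latency targets) rather than a single size cutoff.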

7. Paid / Hosted Services

AI Routing & Gateways

  • openrouter.ai – Unified gateway for AI services; allows using different LLMs without integrating each one manually

Free Tier:

  • ~50 requests per day for models tagged :free
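The gateway value comes from OpenRouter exposing an OpenAI-compatible chat endpoint, so switching providers usually means changing only the model string. A sketch that builds (but does not send) such a request; the endpoint path and header names reflect OpenRouter's public docs, but verify against the current documentation before use:

```python
import json

# Endpoint per OpenRouter's public docs; confirm before relying on it.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str, api_key: str):
    """Build an OpenAI-style chat request for OpenRouter (not sent here)."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,  # e.g. a model ID ending in ":free" for the free tier
        "messages": [{"role": "user", "content": prompt}],
    })
    return OPENROUTER_URL, headers, body
```

Because the payload is the standard chat-completions shape, any OpenAI-compatible client library can target this URL by overriding its base URL.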

8. Self-Hosted / Local-First AI

Platforms for running AI locally, as a drop-in replacement for hosted APIs:

  • LocalAI – Open-source, local-first alternative to hosted APIs such as OpenAI and Claude; runs on consumer-grade hardware
  • Agent CLI – Local CLI tools (also self-hosted)
  • CLI Engineer – Self-hosted AI command-line suite
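"Drop-in replacement" here means the request keeps the same OpenAI-style chat shape and only the base URL changes. A standard-library sketch that builds both requests without sending them (the localhost port is LocalAI's commonly documented default, but treat it as an assumption for your install):

```python
import json
import urllib.request

def chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build the same OpenAI-style chat payload; only base_url differs
    between a hosted API and a self-hosted one."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

hosted = chat_request("https://api.openai.com/v1", "gpt-4o", "hi")
# Assumed LocalAI default port; verify for your installation.
local = chat_request("http://localhost:8080/v1", "qwen3", "hi")
```

Since existing code only needs its base URL repointed, migrating between hosted and self-hosted backends requires no changes to the calling logic.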