
AI Coding Glossary

Key terms in AI-assisted development, explained clearly with practical context.

Agentic Coding

Agentic coding is an approach to software development in which an AI agent autonomously reads your codebase, writes code, runs commands, and iterates on the results without manual copy-pasting. Unlike chat-based AI, the agent acts directly in your development environment to complete multi-step tasks.

Claude Code

Claude Code is Anthropic's terminal-based AI coding agent that operates directly in your development environment. It reads your entire project, writes code across multiple files, runs shell commands, manages git workflows, and iterates on errors autonomously, all from the command line.

Model Context Protocol (MCP)

The Model Context Protocol (MCP) is an open standard created by Anthropic that provides a universal way to connect AI models to external tools, data sources, and APIs. It acts as a standardized interface, like USB for AI, so that any MCP-compatible tool can work with any MCP-compatible AI agent.
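In practice, MCP servers are typically declared in a JSON configuration file. The shape below follows the common `mcpServers` convention used by MCP clients; the server name and launch command are hypothetical placeholders:

```json
{
  "mcpServers": {
    "my-docs-server": {
      "command": "node",
      "args": ["./docs-mcp-server.js"]
    }
  }
}
```

Once registered, any MCP-compatible agent can discover and call the tools that server exposes.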

CLAUDE.md

CLAUDE.md is a markdown configuration file placed at the root of your project that gives Claude Code persistent, project-specific instructions. It tells the agent about your coding conventions, architecture, common commands, and rules, acting as a form of long-term memory that applies to every session within that project.
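A minimal CLAUDE.md might look like the following; the commands and conventions are illustrative placeholders, not requirements of any real project:

```markdown
# Project Guidelines

## Commands
- Build: `npm run build`
- Test: `npm test`
- Lint: `npm run lint`

## Conventions
- TypeScript strict mode; avoid `any`
- Prefer named exports over default exports
- Every new module needs unit tests
```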

AI Pair Programming

AI pair programming is a development workflow in which a human developer works alongside an AI tool to write code collaboratively in real time. The developer provides direction, context, and judgment while the AI contributes code suggestions, chases down bugs, and handles repetitive implementation tasks.

Context Window

The context window is the maximum number of tokens (words, characters of code, and symbols) that an AI model can process in a single interaction. It sets the upper limit on how much information, including your prompt, your code, and the model's response, the AI can hold in memory at once.
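The practical consequence is a token budget: everything in the interaction must fit inside the window. A sketch of the arithmetic, assuming a hypothetical 200,000-token window and made-up usage figures:

```python
# Rough token-budget arithmetic for a hypothetical 200k-token context window.
CONTEXT_WINDOW = 200_000

system_prompt = 1_500    # tokens used by standing instructions
code_context = 120_000   # tokens of source files included in the prompt
conversation = 8_500     # tokens of prior messages

used = system_prompt + code_context + conversation
remaining = CONTEXT_WINDOW - used  # budget left for the model's response

print(remaining)  # 70000
```

When `remaining` approaches zero, older context must be dropped or summarized before the model can respond.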

Coding Agent

A coding agent is an AI-powered tool that can autonomously read files, write code, execute terminal commands, and iterate on the results to complete programming tasks. Unlike passive code-suggestion tools, a coding agent takes independent actions in your development environment to achieve a stated goal.

Vibe Coding

Vibe coding is an informal approach to software development in which a developer describes what they want in natural language and lets an AI tool handle the implementation details. Rather than writing precise specifications, the developer communicates intent through casual conversation and iterates based on the results.

AI Code Review

AI code review is the process of using artificial intelligence to automatically examine source code for bugs, security vulnerabilities, style inconsistencies, and quality issues. AI reviewers can analyze pull requests, suggest improvements, and catch problems that human reviewers might miss due to fatigue or time pressure.

Prompt Engineering for Code

Prompt engineering for code is the practice of crafting clear, precise instructions that help AI coding tools produce accurate, relevant output. It involves structuring your requests with the right level of context, constraints, and examples so the AI understands both what you want and how you want it done.
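The difference is concrete when you compare a vague request with one that carries context and constraints. Both prompts below are illustrative, not taken from any real project:

```python
# A vague prompt leaves the AI guessing about goals and constraints.
vague = "Make this faster."

# A precise prompt states context, constraints, and the code in question.
precise = (
    "This Python function is on our hot path (~1M calls per request). "
    "Optimize it for speed without changing its signature. "
    "Target: Python 3.11, standard library only.\n\n"
    "def dedupe(items):\n"
    "    result = []\n"
    "    for item in items:\n"
    "        if item not in result:\n"
    "            result.append(item)\n"
    "    return result"
)
```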

Headless AI Agent

A headless AI agent is a coding agent that runs without a human-facing interface or real-time interaction. It executes tasks autonomously in background processes, CI/CD pipelines, or scheduled jobs, reading code, making changes, running tests, and reporting results without waiting for human input at any step.

Sub-Agents

Sub-agents are parallel child processes spawned by a main AI coding agent to handle independent parts of a complex task simultaneously. Instead of processing everything sequentially, the main agent delegates sub-tasks to specialized child agents that run in parallel and report their results back to the parent.
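The pattern resembles ordinary task fan-out. A sketch using a thread pool to stand in for child agents; `run_subagent` here is just a placeholder function, where a real sub-agent would be a full model session:

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task: str) -> str:
    # Placeholder: a real sub-agent would work on this task independently.
    return f"done: {task}"

subtasks = ["update API docs", "fix failing tests", "add type hints"]

# The parent delegates sub-tasks in parallel and collects results in order.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_subagent, subtasks))

print(results)
```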

AI Code Completion

AI code completion is a feature in development tools that uses machine learning models to predict and suggest code as you type. It ranges from single-line autocomplete to multi-line function generation, analyzing the surrounding code context to offer relevant suggestions that match your intent and coding style.

Large Language Model (LLM)

A large language model (LLM) is a deep learning system with billions of parameters, trained on vast datasets of text and code to understand, generate, and reason about natural language and programming languages. LLMs like Claude, GPT-4, and Gemini are the foundation of modern AI coding tools.

Retrieval-Augmented Generation (RAG)

Retrieval-augmented generation (RAG) is an AI architecture that improves the accuracy of language model responses by retrieving relevant information from external knowledge sources before generating an answer. Instead of relying solely on what the model memorized during training, RAG fetches up-to-date, domain-specific data and includes it in the model's context.
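The pipeline can be sketched in a few lines. Here retrieval is a naive keyword-overlap scorer standing in for a real embedding search, and the document text is made up for illustration:

```python
# Minimal RAG sketch: retrieve the most relevant document, build a prompt.
docs = {
    "auth.md": "Sessions expire after 30 minutes of inactivity.",
    "billing.md": "Invoices are generated on the first day of each month.",
}

def retrieve(query: str) -> str:
    """Naive retrieval: pick the doc sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs.values(), key=lambda text: len(q & set(text.lower().split())))

def build_prompt(query: str) -> str:
    context = retrieve(query)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer using only the context."

prompt = build_prompt("When do sessions expire?")
```

A production system would replace `retrieve` with a vector search over embeddings, but the assembly step, fetched context prepended to the question, is the same.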

Tool Use

Tool use (also called tool calling) is the capability of a large language model to invoke external functions, APIs, or system commands as part of generating a response. Instead of being limited to producing text, a model with tool use can read files, run code, query databases, and interact with services—making it the foundation of agentic AI systems.

Function Calling

Function calling is an AI model capability where the model generates structured JSON arguments to invoke external functions instead of producing plain text. This enables LLMs to interact with APIs, databases, file systems, and other tools in a reliable, programmatic way—turning a conversational model into one that can take real-world actions.
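The mechanics can be sketched without any real API: the model emits a JSON object naming a function and its arguments, and application code parses and dispatches it. The function and argument names here are hypothetical:

```python
import json

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub standing in for a real API call

TOOLS = {"get_weather": get_weather}

# A model with function calling returns structured output like this
# instead of prose:
model_output = '{"name": "get_weather", "arguments": {"city": "Lisbon"}}'

call = json.loads(model_output)
result = TOOLS[call["name"]](**call["arguments"])
print(result)  # Sunny in Lisbon
```

The structured format is what makes the interaction reliable: the application validates and executes the call, then typically feeds the result back to the model for its final answer.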

System Prompt

A system prompt is a set of instructions provided to an AI model before the user's message that defines the model's behavior, persona, constraints, and capabilities. It acts as a configuration layer that shapes every response the model produces, without the user needing to repeat these instructions in each message.
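In chat-style APIs the system prompt usually lives outside the user turns. A sketch of the request shape; the field names follow the common messages format, though details vary by provider:

```python
# The system prompt applies to every response; user messages change per turn.
request = {
    "system": "You are a senior Python reviewer. Be concise. "
              "Flag security issues first.",
    "messages": [
        {"role": "user", "content": "Review this function: eval(user_input)"},
    ],
}
```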

Temperature

Temperature is a parameter in large language models that controls the randomness of the output. A temperature of 0 makes the model deterministic, always choosing the most probable next token. Higher temperatures (up to 1.0 or 2.0) increase randomness, making less probable tokens more likely to be selected. For coding tasks, lower temperatures generally produce more reliable, consistent code.
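Mechanically, temperature divides the model's logits before the softmax, so lower values sharpen the distribution toward the top token. A self-contained sketch of that effect:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities, scaled by temperature.
    (Temperature 0 is handled as plain argmax in practice, since
    dividing by zero is undefined.)"""
    scaled = [x / temperature for x in logits]
    exps = [math.exp(x) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # scores for three candidate tokens

cool = softmax_with_temperature(logits, 0.2)  # top token dominates
warm = softmax_with_temperature(logits, 1.5)  # probability spreads out
```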

Token

A token is the fundamental unit of text that a large language model processes. Tokenization splits text into chunks—sometimes whole words, sometimes subwords, sometimes individual characters—that the model can work with. In English text, one token is roughly 3-4 characters or 0.75 words. In code, tokens map to keywords, operators, variable names, and whitespace.
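The rough characters-per-token ratio gives a quick way to estimate context usage before reaching for a real tokenizer. This is only a heuristic for English text, not an actual tokenization:

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English text.
    Real tokenizers (BPE and similar) give exact, model-specific counts."""
    return max(1, len(text) // 4)

prompt = "Refactor this function to use a dictionary lookup."
print(estimate_tokens(prompt))  # 12
```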

Fine-Tuning

Fine-tuning is the process of further training a pre-trained large language model on a smaller, task-specific dataset to adapt its behavior for a particular use case. The model's weights are updated to specialize in a domain—such as a specific programming language, codebase, or output format—while retaining its general capabilities from pre-training.

Code Generation

AI code generation is the process of using artificial intelligence to produce source code from natural language descriptions, specifications, or existing code context. Modern code generation powered by LLMs can write entire functions, classes, tests, and even full applications from high-level instructions, across virtually any programming language.

AI Refactoring

AI refactoring is the use of artificial intelligence to automatically restructure, simplify, and improve existing source code without changing its external behavior. AI refactoring tools analyze code for complexity, duplication, poor naming, and anti-patterns, then apply transformations that make the code cleaner, more maintainable, and easier to understand.
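A typical transformation is collapsing a manual loop into an idiomatic form without changing behavior. An illustrative before-and-after:

```python
# Before: manual accumulation with index-based access.
def squares_of_evens_before(numbers):
    result = []
    for i in range(len(numbers)):
        if numbers[i] % 2 == 0:
            result.append(numbers[i] ** 2)
    return result

# After: same behavior, expressed as a comprehension.
def squares_of_evens_after(numbers):
    return [n ** 2 for n in numbers if n % 2 == 0]

# External behavior is unchanged, which is the defining property
# of a refactor:
assert squares_of_evens_before([1, 2, 3, 4]) == squares_of_evens_after([1, 2, 3, 4])
```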

AI Testing

AI testing is the application of artificial intelligence to software testing workflows—including generating unit tests, integration tests, and end-to-end tests from source code; identifying untested edge cases; analyzing test failures; and suggesting fixes. AI testing tools understand code semantics to write meaningful tests that go beyond basic coverage.
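The output of such a tool is ordinary test code. A sketch of what generated unit tests for a small function might look like; the function and the chosen edge cases are illustrative:

```python
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Tests an AI tool might generate, including edge cases a human
# could skip under time pressure:
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_extra_whitespace():
    assert slugify("  spaced   out  ") == "spaced-out"

def test_empty_string():
    assert slugify("") == ""

test_basic(); test_extra_whitespace(); test_empty_string()
```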

Multi-Modal AI

Multi-modal AI refers to artificial intelligence systems that can process, understand, and generate multiple types of data—text, images, audio, video, and code—within a single model. Unlike single-modal models that only handle text, multi-modal models can analyze a screenshot of a UI, read the associated code, and generate modifications based on both visual and textual understanding.

Chain-of-Thought

Chain-of-thought (CoT) prompting is a technique that encourages a large language model to break down complex problems into intermediate reasoning steps before producing a final answer. Instead of jumping to a conclusion, the model "thinks out loud," explaining each step of its logic. This significantly improves accuracy on tasks that require multi-step reasoning, including debugging, algorithm design, and code architecture decisions.
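Applying the technique is often as simple as appending a reasoning instruction to the prompt. A minimal illustration with a made-up debugging task:

```python
# Same task, with and without a chain-of-thought instruction.
task = "Why does this loop never terminate: while i < 10: print(i)"

direct_prompt = task
cot_prompt = task + (
    "\n\nThink step by step: state what the loop condition depends on, "
    "check whether it ever changes, then conclude."
)
```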

Few-Shot Prompting

Few-shot prompting is a technique where you include a small number of example input-output pairs in your prompt to demonstrate the pattern you want the AI to follow. By showing the model 2-5 examples of the desired behavior, it learns the format, style, and logic you expect—without any model training or fine-tuning. This is one of the most effective techniques for getting consistent, formatted output from LLMs.
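The examples are usually just concatenated into the prompt ahead of the new input. A sketch, with made-up examples demonstrating a commit-message format:

```python
# Build a few-shot prompt: examples first, then the new input.
examples = [
    ("added user login form", "feat(auth): add user login form"),
    ("fixed crash on empty cart", "fix(cart): handle empty cart crash"),
]

def few_shot_prompt(new_input: str) -> str:
    lines = ["Rewrite each change as a conventional commit message.", ""]
    for raw, formatted in examples:
        lines.append(f"Input: {raw}")
        lines.append(f"Output: {formatted}")
        lines.append("")
    lines.append(f"Input: {new_input}")
    lines.append("Output:")  # the model continues from here
    return "\n".join(lines)

prompt = few_shot_prompt("updated readme with setup steps")
```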

Zero-Shot Prompting

Zero-shot prompting is a technique where you instruct an AI model to perform a task without providing any examples of the desired input-output format. You describe what you want in natural language, and the model relies entirely on its pre-trained knowledge to produce the output. It is the most natural way to interact with AI—just tell it what to do.

Embeddings

Embeddings are dense numerical vectors (arrays of floating-point numbers) that represent text, code, or other data in a high-dimensional space where semantically similar items are positioned close together. They enable AI systems to measure similarity between pieces of code, search codebases by meaning rather than keywords, and power retrieval-augmented generation (RAG) systems.
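Similarity between embeddings is commonly measured with cosine similarity. A sketch using tiny made-up 3-dimensional vectors; real embeddings have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings for three code snippets (values are invented).
sort_list = [0.9, 0.1, 0.0]
order_array = [0.8, 0.2, 0.1]
parse_json = [0.1, 0.0, 0.95]

# Semantically similar snippets score higher:
assert cosine_similarity(sort_list, order_array) > cosine_similarity(sort_list, parse_json)
```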

Vector Database

A vector database is a specialized database designed to store, index, and search high-dimensional embedding vectors efficiently. Unlike traditional databases that match exact values or keywords, vector databases find the most similar vectors to a query vector—enabling semantic search, recommendation systems, and the retrieval component of RAG (retrieval-augmented generation) architectures.
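At its core, a vector query is a nearest-neighbor lookup; real vector databases add approximate indexes (such as HNSW or IVF) so they avoid scanning every vector. A brute-force sketch with toy 2-dimensional vectors:

```python
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy index mapping document ids to embedding vectors (values invented).
index = {
    "doc_auth": [0.9, 0.1],
    "doc_billing": [0.1, 0.9],
    "doc_login": [0.85, 0.2],
}

def nearest(query, k=2):
    """Brute-force k-nearest-neighbor search over the whole index."""
    return sorted(index, key=lambda doc_id: distance(query, index[doc_id]))[:k]

print(nearest([0.88, 0.15]))  # ['doc_auth', 'doc_login']
```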

Technical Debt

Technical debt is the implied cost of future rework caused by choosing a quick, expedient solution now instead of a better approach that would take longer. Like financial debt, it accumulates interest: the longer it remains unaddressed, the more time and effort future changes require. Common sources include rushed features, skipped tests, outdated dependencies, and inconsistent architecture.