AI Glossary

14/04/2026

A

Accuracy

A metric that measures how often an AI model’s predictions are correct. Because accuracy alone can be misleading (for example, on imbalanced datasets), it’s often reported alongside other metrics like “Precision” and “Recall.”
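As a rough sketch, here is how accuracy, precision, and recall can be computed by hand for a binary classifier (the labels here are made up for illustration):

```python
# Toy predictions for a binary classifier: 1 = positive class, 0 = negative.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Accuracy: fraction of all predictions that match the true label.
accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)

# Precision and recall focus only on the positive class.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)  # of the predicted positives, how many were right?
recall = tp / (tp + fn)     # of the actual positives, how many were found?

print(accuracy, precision, recall)  # 0.75 0.75 0.75
```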

Activation Function

A mathematical formula in a neural network that determines whether a specific neuron should be “fired” or activated based on its input.
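Two of the most common activation functions, ReLU and sigmoid, can be written in a few lines:

```python
import math

def relu(x):
    # ReLU ("rectified linear unit"): passes positive inputs through
    # unchanged and zeroes out negative inputs.
    return max(0.0, x)

def sigmoid(x):
    # Sigmoid: squashes any real-valued input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

print(relu(-2.0), relu(3.0))  # 0.0 3.0
print(sigmoid(0.0))           # 0.5
```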

Adversarial Machine Learning

A technique focused on creating “adversarial examples”: slightly modified inputs designed to cause an AI model to make a mistake or hallucinate.

Agent

An autonomous or semi-autonomous entity that perceives its environment and takes actions to achieve specific goals, often used in Reinforcement Learning.

Alignment

The challenge of ensuring that an AI’s goals and behaviors are consistent with human values and intentions.

Algorithm

A set of step-by-step instructions or rules followed by a computer to complete a task or solve a problem.

Anomaly Detection

The process of identifying data points or patterns that differ significantly from the norm; it is often used for fraud detection or server monitoring.
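A minimal anomaly detector can be built with z-scores: flag any value that sits unusually far from the mean. This is just one simple approach (the threshold here is an arbitrary choice for illustration):

```python
import statistics

def zscore_anomalies(values, threshold=2.0):
    # Flag values more than `threshold` standard deviations from the mean.
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Server response times in ms: one request is clearly abnormal.
latencies = [102, 98, 101, 99, 100, 97, 350]
print(zscore_anomalies(latencies))  # [350]
```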

API (Application Programming Interface)

A set of protocols and tools that allow different software applications to communicate with each other, such as connecting a custom website to a model like Gemini.

Artificial Intelligence (AI)

The overarching field of creating machines or software capable of simulating human intelligence, such as reasoning, learning, and problem-solving.

Artificial General Intelligence (AGI)

A theoretical form of AI that possesses the ability to understand, learn, and apply knowledge across any intellectual task at a human level or higher.

Artificial Narrow Intelligence (ANI)

Also known as “Weak AI,” this refers to AI systems designed and trained for a specific task (e.g., facial recognition or playing chess).

Asymmetric Model

A model architecture where the encoder and decoder (or different parts of the network) have different sizes or structures, often to optimize for speed or specific tasks.

Attention Mechanism

A component of neural network architectures (like Transformers) that allows the model to focus on specific, relevant parts of the input data when making a prediction.
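The core idea, scaled dot-product attention, can be sketched in plain Python for a single query vector. This is a simplified illustration, not a full Transformer implementation:

```python
import math

def softmax(xs):
    # Convert raw scores into weights that are positive and sum to 1.
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Score the query against each key (scaled dot product) ...
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # ... turn scores into attention weights ...
    weights = softmax(scores)
    # ... and return the weighted sum of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query resembles the first key, so the output leans toward
# the first value vector.
out = attention(query=[1.0, 0.0],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
```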

Augmentation (Data Augmentation)

A technique where new training data is created by slightly modifying existing data (e.g., flipping or rotating images) to help a model generalize better.
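For image data, the flips and rotations mentioned above amount to simple index manipulations on the pixel grid. A tiny sketch using nested lists (a real pipeline would operate on image tensors):

```python
def flip_horizontal(image):
    # Mirror each row of a 2D pixel grid left-to-right.
    return [row[::-1] for row in image]

def rotate_90(image):
    # Rotate a 2D pixel grid 90 degrees clockwise.
    return [list(row) for row in zip(*image[::-1])]

img = [[1, 2],
       [3, 4]]
print(flip_horizontal(img))  # [[2, 1], [4, 3]]
print(rotate_90(img))        # [[3, 1], [4, 2]]
```

Each transformed copy is a new, label-preserving training example: a flipped photo of a cat is still a cat.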

Autoencoder

A type of neural network trained to compress data (the “encoder”) and then reconstruct it (the “decoder”), often used for noise reduction or data compression.

B

Backpropagation

Short for “backward propagation of errors.” It is the central algorithm used to train neural networks. It works by calculating the error at the output and distributing it back through the network’s layers to adjust the weights and minimize mistakes.
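For a single sigmoid neuron with a squared-error loss, one backpropagation step is just the chain rule applied from the output back to the weight and bias. A minimal sketch (the learning rate and starting values are arbitrary):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop_step(w, b, x, target, lr=0.5):
    # Forward pass: compute the neuron's output.
    y = sigmoid(w * x + b)
    # Backward pass: chain rule from the loss back to w and b.
    dloss_dy = 2.0 * (y - target)  # derivative of (y - target)^2
    dy_dz = y * (1.0 - y)          # derivative of the sigmoid
    grad_w = dloss_dy * dy_dz * x
    grad_b = dloss_dy * dy_dz
    # Adjust the weights in the direction that reduces the error.
    return w - lr * grad_w, b - lr * grad_b

def loss(w, b, x, target):
    return (sigmoid(w * x + b) - target) ** 2

w, b = 0.5, 0.0
before = loss(w, b, x=1.0, target=1.0)
w, b = backprop_step(w, b, x=1.0, target=1.0)
after = loss(w, b, x=1.0, target=1.0)
print(after < before)  # True: the step reduced the error
```

Training a full network repeats this idea layer by layer, propagating each layer's error gradient backward to the one before it.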
