A metric that measures how often an AI model’s predictions are correct; it is usually reported alongside other metrics such as “Precision” and “Recall,” which capture different kinds of errors.
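All three metrics can be computed from counts of true and false positives and negatives. A minimal sketch, using made-up labels and predictions for illustration:

```python
# Hypothetical ground-truth labels and model predictions (illustrative data).
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
precision = tp / (tp + fp)  # of the predicted positives, how many were right
recall = tp / (tp + fn)     # of the actual positives, how many were found
```

On imbalanced data, accuracy can look high while precision or recall is poor, which is why the metrics are reported together.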
A mathematical formula in a neural network that determines whether a specific neuron should be “fired” or activated based on its input.
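Two of the most common activation functions, sketched in plain Python:

```python
import math

def sigmoid(x):
    # Squashes any real input into (0, 1); large positive inputs approach 1.
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Rectified Linear Unit: passes positive inputs through, zeroes out negatives.
    return max(0.0, x)
```

Without a nonlinear activation, stacked layers collapse into a single linear transformation, which is why these functions matter.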
A technique focused on creating “adversarial examples”: inputs modified slightly, often imperceptibly, so that they cause an AI model to make a mistake or hallucinate.
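A minimal sketch of the idea, assuming a toy linear classifier with hand-picked weights: the input is perturbed in the direction that increases the loss (the sign of the gradient, as in the fast gradient sign method), flipping the prediction.

```python
import math

# Toy linear classifier: predict 1 if sigmoid(w . x) > 0.5 (weights are made up).
w = [2.0, -1.0]
x = [0.5, -0.2]  # hypothetical input, correctly classified as class 1

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if 1.0 / (1.0 + math.exp(-score)) > 0.5 else 0

# FGSM-style perturbation: for true class 1, the loss gradient w.r.t. x is a
# negative multiple of w, so we step each feature by -eps * sign(w).
eps = 0.8
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]
```

The perturbed `x_adv` is close to `x` yet classified differently, which is exactly the failure mode adversarial attacks exploit.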
An autonomous or semi-autonomous entity that perceives its environment and takes actions to achieve specific goals, often used in Reinforcement Learning.
The challenge of ensuring that an AI’s goals and behaviors are consistent with human values and intentions.
A set of step-by-step instructions or rules followed by a computer to complete a task or solve a problem.
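A classic illustration is binary search, which finds a value in a sorted list by repeatedly halving the search range:

```python
def binary_search(items, target):
    # Step-by-step rule: compare the middle element, then discard half the range.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid       # found: return the index
        if items[mid] < target:
            lo = mid + 1     # target must be in the upper half
        else:
            hi = mid - 1     # target must be in the lower half
    return -1                # not present
```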
The process of identifying data points or patterns that differ significantly from the norm, often used for fraud detection or server monitoring.
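One simple approach flags any point more than a few standard deviations from the mean. A sketch with made-up server latencies, where the spike is the simulated anomaly:

```python
import statistics

# Hypothetical server response times in ms; 480 is a simulated spike.
latencies = [102, 98, 105, 99, 101, 480, 97, 103]

mean = statistics.mean(latencies)
stdev = statistics.stdev(latencies)

# Flag points more than 2 standard deviations from the mean.
anomalies = [x for x in latencies if abs(x - mean) > 2 * stdev]
```

Real systems typically use more robust methods (isolation forests, autoencoders), but the principle of scoring deviation from normal behavior is the same.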
A set of protocols and tools that allow different software applications to communicate with each other, such as connecting a custom website to a model like Gemini.
The overarching field of creating machines or software capable of simulating human intelligence, such as reasoning, learning, and problem-solving.
A theoretical form of AI that possesses the ability to understand, learn, and apply knowledge across any intellectual task at a human level or higher.
Also known as “Weak AI,” this refers to AI systems designed and trained for a specific task (e.g., facial recognition or playing chess).
A model architecture where the encoder and decoder (or different parts of the network) have different sizes or structures, often to optimize for speed or specific tasks.
A component of neural network architectures (like Transformers) that allows the model to focus on specific, relevant parts of the input data when making a prediction.
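The Transformer's scaled dot-product attention can be sketched in a few lines of NumPy: queries are scored against keys, the scores are softmaxed into weights, and the output is a weighted average of the values. The tiny Q/K/V matrices below are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Scores measure how relevant each key is to each query.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns scores into attention weights that sum to 1 per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output is a weighted average of the values.
    return weights @ V, weights

Q = np.array([[1.0, 0.0]])               # one query
K = np.array([[1.0, 0.0], [0.0, 1.0]])   # two keys
V = np.array([[10.0, 0.0], [0.0, 10.0]]) # two values
out, w = scaled_dot_product_attention(Q, K, V)
```

Here the query aligns with the first key, so the first value dominates the output: that "focus on the relevant part of the input" is the attention mechanism.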
A technique where new training data is created by slightly modifying existing data (e.g., flipping or rotating images) to help a model generalize better.
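A minimal sketch with a made-up 2x3 "image": flipping and rotating each produce a new, label-preserving training example at essentially no cost.

```python
import numpy as np

# Hypothetical 2x3 grayscale image (rows x columns of pixel values).
image = np.array([[1, 2, 3],
                  [4, 5, 6]])

# Horizontal flip: reverse the column order to create a new training example.
flipped = np.fliplr(image)

# 90-degree (counterclockwise) rotation: another cheap way to grow the dataset.
rotated = np.rot90(image)
```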
A type of neural network trained to compress data (the “encoder”) and then reconstruct it (the “decoder”), often used for noise reduction or data compression.
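The encode-compress-decode idea can be sketched with fixed linear maps; a real autoencoder would learn these weights by minimizing reconstruction error, so the matrices below are purely illustrative.

```python
import numpy as np

# "Encoder": projects a 4-dim input down to a 2-dim code (weights made up,
# not learned). "Decoder": maps the code back up to 4 dimensions.
W_enc = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])  # 4 -> 2
W_dec = W_enc.T                            # 2 -> 4

x = np.array([3.0, -1.0, 0.0, 0.0])  # input that this toy code can represent
code = W_enc @ x                      # compressed 2-dim representation
x_hat = W_dec @ code                  # reconstruction from the code
```

Training forces the network to keep only the information needed to reconstruct the input, which is what makes autoencoders useful for compression and denoising.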
Short for “backward propagation of errors,” the core algorithm for training neural networks: the prediction error is propagated backward through the network using the chain rule to compute how much each weight contributed, so the weights can be adjusted to reduce the error.
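A minimal single-weight sketch of the forward pass, the backward (chain-rule) pass, and one gradient-descent update; the numbers are arbitrary illustrative values.

```python
# Forward pass: prediction y_hat = w * x, squared-error loss.
w, x, y = 0.5, 2.0, 3.0
y_hat = w * x
loss = (y_hat - y) ** 2

# Backward pass: chain rule propagates the error back to the weight.
dloss_dyhat = 2 * (y_hat - y)    # derivative of loss w.r.t. the output
dyhat_dw = x                     # derivative of the output w.r.t. the weight
grad_w = dloss_dyhat * dyhat_dw  # chain rule: dloss/dw

# Gradient-descent update: nudge the weight against the gradient.
lr = 0.1
w_new = w - lr * grad_w
```

After the update, the new prediction `w_new * x` is closer to the target, which is the whole point of the backward pass.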