AI Glossary - Everything You Need to Know About Artificial Intelligence
Your complete guide to AI terms, tools, and techniques.
A
AGI (Artificial General Intelligence)
A hypothetical AI system that could perform any intellectual task at the same level as a human. Unlike today's narrow AI systems (which only do one thing well), AGI could learn and adapt to entirely new situations. We are not there yet.
AI Agent
An AI system that can independently perform tasks for you without constant guidance. Think of a digital assistant that can schedule meetings, research information, write reports, and make decisions based on your goals. Examples: AutoGPT, AgentGPT.
→ Explore AI agents
API (Application Programming Interface)
A technical link that allows different programs to "talk" to each other. When you use ChatGPT via an app instead of the website, the app uses OpenAI's API. For developers who want to integrate AI into their own services.
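A minimal sketch of what an API call can look like, using OpenAI's official Python SDK. The model name and prompt are placeholders; a real integration would also handle errors and authentication details.

```python
# Minimal sketch: calling a chat model through OpenAI's API.
# Assumes the openai package is installed and the OPENAI_API_KEY
# environment variable is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Summarize what an API is in one sentence."}
    ],
)

print(response.choices[0].message.content)
```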
Attention Mechanism
The technical core of modern language models that allows AI to "focus" on relevant parts of the text. This is why ChatGPT understands that "it" in the sentence "The car crashed into the tree, it was completely wrecked" refers to the car, not the tree.
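To make the idea concrete, here is a toy implementation of scaled dot-product attention, the calculation at the heart of the mechanism. The vectors are random stand-ins for real token representations.

```python
# Toy illustration of scaled dot-product attention.
# Q, K, V stand in for query/key/value vectors of 4 tokens.
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each token attends to the others
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax -> attention weights
    return weights @ V  # weighted mix of value vectors

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))  # 4 tokens, 8-dimensional vectors
print(attention(Q, K, V).shape)      # (4, 8)
```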
B
Bias
When AI systems reflect or amplify human biases from training data. Example: A recruitment AI that favors male candidates because it was trained on historical data where mostly men were hired. Critical to understand and counteract.
BERT (Bidirectional Encoder Representations from Transformers)
Google's language model that revolutionized how AI understands context. Unlike older models, BERT reads sentences in both directions simultaneously, which provides better understanding. Used behind the scenes in Google Search.
C
ChatGPT
OpenAI's popular conversational AI that can write texts, answer questions, code, analyze, and much more. Uses GPT models (Generative Pre-trained Transformer) and has become synonymous with "talking to AI."
→ Test ChatGPT
Claude
Anthropic's AI assistant, known for longer conversations, a stronger focus on safety, and its ability to handle large amounts of text. Often preferred by developers and companies that want more controllable AI.
→ Test Claude
Computer Vision
AI that can "see" and interpret images and video. Used for facial recognition, self-driving cars, medical diagnosis from X-rays, and quality control in factories.
→ See image analysis tools
Context Window
How much text an AI can "remember" at once. GPT-4 Turbo can handle ~128,000 tokens (about 300 pages), while earlier models managed only a few thousand. Larger context = better understanding of long documents.
D
DALL-E
OpenAI's AI for image generation from text descriptions. Type "a cat in a spacesuit painting watercolor on Mars" and get a finished image in seconds. The latest version (DALL-E 3) is integrated in ChatGPT Plus.
→ See image generation tools
Dataset
The collection of data that AI is trained on. A language model can be trained on billions of web pages, books, and conversations. The quality of the dataset determines how good the AI becomes – "garbage in, garbage out."
Deep Learning
A type of machine learning that uses neural networks with many layers (hence "deep"). This is the technology behind almost all modern AI – from language models to image generation to self-driving cars.
Diffusion Models
The technical method behind modern image generation tools like Midjourney and Stable Diffusion. It works by starting with pure noise and step by step denoising it into an image that matches your prompt.
E
Embeddings
A way to convert words and sentences into mathematical representations (vectors) that AI can understand. Words with similar meanings get similar numbers. Therefore, AI understands that "dog" and "puppy" are related.
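A small sketch of the idea: words become vectors, and similarity is measured as the cosine of the angle between them. The three-dimensional vectors below are invented for illustration; real embeddings have hundreds or thousands of dimensions.

```python
# Toy example: words as vectors, similarity as the cosine between them.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

dog    = np.array([0.90, 0.80, 0.10])
puppy  = np.array([0.85, 0.75, 0.20])
banana = np.array([0.10, 0.20, 0.90])

print(cosine_similarity(dog, puppy))   # high -> related meanings
print(cosine_similarity(dog, banana))  # low  -> unrelated meanings
```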
Emergent Behavior
When AI systems suddenly show abilities they weren't specifically trained for. Example: GPT-3 learned to translate languages even though it wasn't an explicit training goal. Both fascinating and somewhat unpredictable.
F
Few-Shot Learning
When AI can learn new tasks from just a few examples. Give ChatGPT 3 examples of how you want to format texts, and it can continue in the same style. Makes AI much more flexible.
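A sketch of what a few-shot prompt can look like in practice: the examples in the prompt teach the model the desired format before it sees the new input. The headlines are invented and the model name is only illustrative.

```python
# Few-shot prompting: the two worked examples show the model the pattern,
# and it completes the third in the same style.
from openai import OpenAI

client = OpenAI()

prompt = """Rewrite the headline in a friendlier tone.

Headline: Quarterly report released
Friendly: Here's how our quarter went!

Headline: System maintenance tonight
Friendly: We're tidying up tonight - back soon!

Headline: New pricing model announced
Friendly:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```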
Fine-tuning
Training an existing AI model further on specific data to make it better at something particular. You can fine-tune GPT-4 on your company documents to get an AI that "speaks like you."
→ See tools for fine-tuning
Foundation Model
Large, general AI models (like GPT-4, Claude, Gemini) that are trained on enormous amounts of data and can be adapted for thousands of different tasks. The foundation on which other AI applications are built.
G
Generative AI
AI that creates new content – text, images, video, music, code. Unlike traditional AI that only analyzes, generative AI can actually produce original material. ChatGPT, Midjourney, and Runway are examples.
→ Explore generative tools
Gemini
Google's advanced AI model (formerly called Bard). Multimodal, which means it can handle text, image, video, and sound simultaneously. Competitor to GPT-4 and Claude.
→ Test Gemini
GPT (Generative Pre-trained Transformer)
OpenAI's family of language models. GPT-4 is the latest version and powers ChatGPT Plus. "Pre-trained" means it is first trained on massive amounts of data before being fine-tuned for specific tasks.
Guardrails
Safety rules and restrictions that prevent AI from generating harmful, illegal, or unethical content. That's why ChatGPT refuses to help with certain things even if it technically could.
H
Hallucination
When AI makes up facts or answers confidently even when the information is incorrect. A known problem with language models – they are trained to sound convincing, not necessarily to be correct. Always double-check important information.
I
Image-to-Image
AI that transforms an image into another style or form. Upload a sketch and get a photorealistic image, or convert a summer photo to winter. Used in tools like Stable Diffusion and Midjourney.
→ See image-to-image tools
Inference
When a trained AI model is actually used to make predictions or generate content. Training happens once (expensive and slow); inference happens every time you use the tool (fast and cheap).
In-Context Learning
AI's ability to learn new patterns just by seeing examples in the prompt, without needing to be retrained. You can "teach" ChatGPT a new writing style in the middle of a conversation.
J
Jasper AI
A popular AI writing tool specialized in marketing and content production. Creates blog posts, ad texts, email campaigns, and social media posts based on your instructions.
→ Test Jasper AI
K
Knowledge Cutoff
The date when an AI model's training data ends. For example, GPT-4's knowledge stops in April 2023 – it knows nothing about events after that date (unless it has web search). Important to keep in mind when asking about current events.
L
Large Language Model (LLM)
Large language models trained on millions of texts that can understand and generate human language. ChatGPT, Claude, and Gemini are all LLMs. "Large" refers to billions of parameters that control the model's behavior.
LoRA (Low-Rank Adaptation)
An efficient method for fine-tuning large AI models without needing to retrain the entire model. Makes it possible to adapt AI to specific tasks even with limited computing power.
M
Machine Learning
The broader category that AI belongs to – systems that learn from data instead of being explicitly programmed. Deep learning is a subcategory of machine learning.
Midjourney
One of the most popular tools for AI-generated art. Known for its artistic style and ability to create fantastic, dreamlike images. Works via Discord.
→ Test Midjourney
Multimodal AI
AI that can handle multiple types of input at the same time – text, image, sound, video. GPT-4V (vision) and Gemini are multimodal. The AI of the future will be even more multimodal.
N
Natural Language Processing (NLP)
The technical field that focuses on getting computers to understand, interpret, and generate human language. Used for translation, sentiment analysis, chatbots, and language models.
Neural Network
A mathematical model inspired by how the brain works – interconnected "neurons" that together can learn complex patterns. The foundation for deep learning and modern AI.
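As a rough illustration, here is a two-layer network's forward pass written out by hand in NumPy. The weights are random; a real network learns them from data.

```python
# A tiny two-layer neural network forward pass, to show what
# "interconnected neurons" means in practice.
import numpy as np

rng = np.random.default_rng(0)
x  = rng.normal(size=3)        # input: 3 features
W1 = rng.normal(size=(3, 4))   # layer 1: 3 inputs -> 4 neurons
W2 = rng.normal(size=(4, 1))   # layer 2: 4 neurons -> 1 output

hidden = np.maximum(0, x @ W1)  # ReLU activation: each neuron "fires" or stays silent
output = hidden @ W2            # weighted sum of the hidden neurons
print(output)
```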
Notion AI
AI features built into the productivity tool Notion. Can write, summarize, translate, and reorganize your documents directly in your workspace.
→ Test Notion AI
O
Open Source AI
AI models whose code and sometimes weights are freely available. Allows developers to use, modify, and improve the models. Examples: Llama (Meta), Stable Diffusion, Mistral.
→ Explore open source models
OpenAI
The company behind ChatGPT, GPT-4, DALL-E, and Whisper. Founded in 2015 with the goal of developing safe AGI. Started as non-profit but is now "capped-profit" after Microsoft's investment.
Overfitting
When an AI model fits its training data so closely that it can no longer generalize to new data. Like memorizing old exams without understanding the concepts – you won't be able to handle new questions.
P
Parameters
The adjustable "knobs" that control an AI model's behavior. GPT-4 is rumored to have over 1 trillion parameters, although OpenAI has not confirmed the figure. More parameters = more complex reasoning, but also more expensive to train and use.
Perplexity
An AI search engine that combines language models with real-time search. Instead of a list of links, you get a cohesive answer with sources. Competitor to Google Search.
→ Test Perplexity
Prompt
The instruction or question you give to an AI. A good prompt is clear, specific, and provides context. "Write a text" is a poor prompt. "Write a 300-word blog post about AI for small businesses, professional tone, include 3 concrete examples" is a good prompt.
Prompt Engineering
The art of formulating prompts to get the best possible results from AI systems. A distinct skill that is becoming increasingly important. Includes techniques like few-shot learning, chain-of-thought, and role prompting.
→ Learn prompt engineering
R
RAG (Retrieval-Augmented Generation)
A technique that allows AI to retrieve relevant information from external sources before responding. Makes AI more factually accurate and up-to-date. Used when you want AI to respond based on your own documents.
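A minimal sketch of the retrieve-then-generate idea. The embed() helper is a hypothetical stand-in for a real embedding model; the point is the retrieval step that picks the most relevant document and places it in the prompt.

```python
# Minimal RAG sketch: embed documents, retrieve the closest one,
# and include it in the prompt sent to a language model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

documents = [
    "Our office is open weekdays 9-17.",
    "Refunds are processed within 14 days.",
]
doc_vectors = [embed(d) for d in documents]

question = "How long do refunds take?"
q_vec = embed(question)

# Retrieve the document whose embedding is closest to the question.
scores = [float(q_vec @ v / (np.linalg.norm(q_vec) * np.linalg.norm(v)))
          for v in doc_vectors]
best_doc = documents[int(np.argmax(scores))]

prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to a language model
```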
Reinforcement Learning from Human Feedback (RLHF)
The training method that makes ChatGPT "helpful and safe." People evaluate AI's responses, and the model learns which behaviors are desirable. That's why ChatGPT feels more like an assistant than a search engine.
Runway
A powerful AI platform for video editing and generation. Can remove backgrounds, generate video from text, and create effects that previously required a Hollywood budget.
→ Test Runway
S
Semantic Search
Search based on meaning rather than exact words. Traditional search matches keywords, semantic search understands context. Therefore, AI search can find the right answer even if you don't use precisely the correct terms.
Sentiment Analysis
AI that analyzes emotions in text – positive, negative, or neutral. Used to monitor brands on social media, analyze customer reviews, or understand market trends.
Stable Diffusion
An open source model for image generation. Unlike DALL-E and Midjourney, you can run Stable Diffusion on your own computer (if you have a good graphics card). Popular among developers and tinkerers.
Synthetic Data
AI-generated data used to train other AI systems. When real data is limited or sensitive (medical records, personal data), synthetic data can fill the gap.
T
Temperature
A parameter that controls how "creative" or "random" an AI model is. Low temperature (0.1-0.3) = predictable, consistent answers. High temperature (0.8-1.0) = creative, varied answers. Adjust according to needs.
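A small sketch of what the parameter actually does: it rescales the model's raw scores (logits) before they are turned into probabilities. The logits below are invented for illustration.

```python
# Lower temperature sharpens the distribution (predictable output),
# higher temperature flattens it (more randomness when sampling).
import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5]  # model's raw preference for three candidate tokens
print(softmax_with_temperature(logits, 0.2))  # nearly all probability on one token
print(softmax_with_temperature(logits, 1.0))  # more even spread -> more varied output
```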
Token
The basic unit in which AI reads and generates text. Approximately 1 token = ¾ of a word, so "Hello there" is about 2 tokens. Important for understanding costs (you pay per token) and limitations (max tokens per request).
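You can count tokens yourself with tiktoken, OpenAI's open-source tokenizer library. A quick sketch:

```python
# pip install tiktoken
# cl100k_base is the encoding used by GPT-4-class models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Hello there")
print(tokens)       # a short list of token IDs
print(len(tokens))  # about 2 tokens for this phrase
```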
Training Data
All data used to train an AI model. For language models, it is books, web pages, articles, code. The quality and diversity of the training data determine how good the model becomes.
Transfer Learning
Taking an AI model trained on one task and using it for something else. For example, an image classification model trained on animals can be easily adapted to recognize cars. Saves a lot of time and resources.
Transformer
The revolutionary AI architecture from 2017 that underlies all modern language models. The name comes from the research paper "Attention Is All You Need." GPT stands for "Generative Pre-trained Transformer."
U
Unsupervised Learning
When AI learns to find patterns in data without being given examples of "correct answers." Unlike supervised learning (where each data point is labeled), the model must figure out the structure on its own. Used for clustering and anomaly detection.
V
Vector Database
A specialized database for storing and searching embeddings (vector representations of text, images, etc.). Critical component in RAG systems and semantic search. Examples: Pinecone, Weaviate.
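To show the core idea, here is a tiny in-memory "vector store" that does nothing but store vectors and return the nearest match. Real vector databases add indexing, filtering, and scale on top of this.

```python
# A minimal in-memory vector store: store (text, vector) pairs and
# return the entries most similar to a query vector.
import numpy as np

class TinyVectorStore:
    def __init__(self):
        self.items = []  # list of (text, vector) pairs

    def add(self, text, vector):
        self.items.append((text, np.asarray(vector, dtype=float)))

    def search(self, query_vector, top_k=1):
        q = np.asarray(query_vector, dtype=float)
        scored = [
            (float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))), text)
            for text, v in self.items
        ]
        return sorted(scored, reverse=True)[:top_k]

store = TinyVectorStore()
store.add("dog", [0.9, 0.8, 0.1])
store.add("banana", [0.1, 0.2, 0.9])
print(store.search([0.85, 0.75, 0.2]))  # the "dog" vector comes out on top
```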
Vision Transformer
An AI architecture that applies transformer technology (from language models) to images. Has revolutionized computer vision in the same way transformers revolutionized NLP.
W
Whisper
OpenAI's AI for speech-to-text (speech recognition). Can transcribe and translate speech in 99+ languages with astonishing accuracy. Open source and free to use.
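A sketch of local transcription with the open-source whisper package; the file name is a placeholder.

```python
# pip install openai-whisper
import whisper

model = whisper.load_model("base")        # small model, runs on a laptop
result = model.transcribe("meeting.mp3")  # detects the language automatically
print(result["text"])
```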
Z
Zero-Shot Learning
When AI can perform tasks it has never seen examples of before, just based on instructions. "Translate this text into Swahili" works even if the model was never specifically trained on Swahili translation. Shows true "intelligence."
Get started with AI
Now that you understand the terminology – explore our 250+ reviewed AI tools and find the ones that suit you best.
→ Explore the AI catalog
→ See the most popular tools
The glossary is continuously updated with new terms. Missing something? Contact us.
Written by: aival.se