10 AI terms you should know about in 2024
The lexicon of AI is evolving rapidly
Since its explosion into mainstream awareness in late 2022, generative AI (GenAI) has transformed our understanding of technology, enabling more accessible interactions with computers through natural language.
A growing number of people around the globe, including in the Middle East, can now follow concepts like ‘prompts’ and ‘machine learning’ in casual conversation. Still, the lexicon of AI continues to evolve rapidly.
Do you know your SLMs from your LLMs? Or the significance of the GPT in ChatGPT? Have you come across RAG as a remedy for AI fabrications, better known as hallucinations?
This article delves into 10 next-level AI terms to bring you up to speed.
1. Frontier models
Frontier models represent the cutting edge of AI technology, boasting expansive capabilities across diverse tasks. As pioneers in AI development collaborate through industry bodies like the Frontier Model Forum, co-founded by Microsoft, they set benchmarks for safety and innovation, ensuring responsible AI deployment.
2. GPU
Graphics Processing Units (GPUs) play a pivotal role in AI’s computational prowess, facilitating parallel processing to handle vast datasets and complex calculations. Essential for both training and inference tasks, GPUs are integral components in the infrastructure supporting today’s most sophisticated AI models.
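As a rough illustration of that parallelism, the sketch below assumes the PyTorch library is installed and, optionally, a CUDA-capable GPU; the matrix sizes are arbitrary and chosen only for demonstration.

```python
import torch

# Two large random matrices; multiplying them involves millions of
# independent multiply-add operations that a GPU can run in parallel.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# Run on the CPU.
cpu_result = a @ b

# If a CUDA-capable GPU is available, move the data there and repeat.
if torch.cuda.is_available():
    a_gpu, b_gpu = a.to("cuda"), b.to("cuda")
    gpu_result = a_gpu @ b_gpu          # same maths, massively parallel
    print("GPU result shape:", gpu_result.shape)
else:
    print("No GPU detected; ran on CPU only.")
```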
3. Grounding
Generative AI excels at crafting narratives and responding to queries yet faces challenges distinguishing fact from fiction. This dilemma, known as hallucination, arises when models rely on outdated or erroneous training data. Grounding addresses the issue by anchoring AI models to current, real-world data, enhancing output accuracy and relevance.
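A minimal sketch of the idea, assuming a hypothetical `ask_model` function that sends a prompt to any chat model: instead of letting the model answer from its (possibly outdated) training data, the current facts are placed directly in the prompt.

```python
# Hypothetical, up-to-date facts pulled from a trusted source at request time.
current_facts = """
Store opening hours (updated today): Sat-Thu 09:00-22:00, Fri 14:00-22:00.
"""

def build_grounded_prompt(question: str) -> str:
    # Anchor the model to the supplied facts and ask it not to guess beyond them.
    return (
        "Answer using ONLY the facts below. If the answer is not in the facts, "
        "say you do not know.\n\n"
        f"Facts:\n{current_facts}\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt("Is the store open on Friday morning?")
# response = ask_model(prompt)  # ask_model is a placeholder for any chat model call
print(prompt)
```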
4. Memory
Although AI models have no true memory of their own, an orchestration layer can simulate memory through structured interactions, temporarily storing contextually relevant information to improve responsiveness and user engagement. Developers are exploring these capabilities to determine how long such memory should persist, making dynamic adjustments based on application needs.
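A minimal sketch, assuming memory is simulated simply by replaying the most recent exchanges with every new request (the window of four turns is an arbitrary choice for illustration):

```python
from collections import deque

class ConversationMemory:
    """Keeps only the most recent turns so each new prompt carries context."""

    def __init__(self, max_turns: int = 4):
        self.turns = deque(maxlen=max_turns)  # older turns drop off automatically

    def remember(self, role: str, text: str) -> None:
        self.turns.append(f"{role}: {text}")

    def as_context(self) -> str:
        return "\n".join(self.turns)

memory = ConversationMemory()
memory.remember("user", "My name is Fatima.")
memory.remember("assistant", "Nice to meet you, Fatima.")
memory.remember("user", "What is my name?")

# The stored turns are prepended to the next prompt so the model can 'recall' them.
print(memory.as_context())
```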
5. Orchestration
The orchestration layer within AI serves as a conductor, guiding the sequence of tasks to optimise responses. This ensures coherence across interactions, for example by storing chat histories to refine subsequent responses or by integrating fresh data via RAG to enrich context and accuracy.
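As a rough sketch, an orchestration layer can be thought of as a fixed sequence of steps wrapped around the model call; the `retrieve_documents` and `call_model` functions below are placeholders for whatever retrieval and model services an application actually uses.

```python
def retrieve_documents(question: str) -> list[str]:
    # Placeholder: in a real system this would query a search index (see RAG below).
    return ["Document snippet relevant to: " + question]

def call_model(prompt: str) -> str:
    # Placeholder: in a real system this would call a language model API.
    return "Model answer based on: " + prompt[:60] + "..."

def orchestrate(question: str, chat_history: list[str]) -> str:
    # 1. Gather fresh context to ground the answer.
    documents = retrieve_documents(question)
    # 2. Assemble the prompt from history, retrieved context and the new question.
    prompt = "\n".join(chat_history + documents + [f"Question: {question}"])
    # 3. Call the model and record the exchange for the next turn.
    answer = call_model(prompt)
    chat_history.extend([f"Question: {question}", f"Answer: {answer}"])
    return answer

history: list[str] = []
print(orchestrate("What warranty comes with the Model X blender?", history))
```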
6. RAG
Retrieval Augmented Generation (RAG) enriches AI capabilities by incorporating external knowledge sources without necessitating retraining. Analogous to Sherlock Holmes consulting ancient scrolls for clues, RAG enables AI systems to access supplementary information. For instance, in retail, RAG could empower a chatbot to draw answers directly from a product catalogue, providing tailored customer support.
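A deliberately simplified sketch of the retail example, assuming a small in-memory product catalogue and a crude keyword-overlap score standing in for a real search index or embedding model:

```python
import re

# Tiny, made-up product catalogue standing in for a retailer's real data.
catalogue = {
    "AquaJet Kettle": "1.7 litre electric kettle, 2 year warranty, auto shut-off.",
    "BreezeFan 300": "Desk fan with three speeds, USB powered, 1 year warranty.",
    "SunBrew Coffee Maker": "Drip coffee maker, 12 cups, includes reusable filter.",
}

def words(text: str) -> set[str]:
    """Lower-case word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Rank catalogue entries by how many words they share with the question."""
    q = words(question)
    scored = sorted(
        catalogue.items(),
        key=lambda item: len(q & words(item[1])),
        reverse=True,
    )
    return [f"{name}: {blurb}" for name, blurb in scored[:top_k]]

question = "What warranty does the kettle have?"
context = retrieve(question)

# The retrieved snippet is appended to the prompt so the model answers from it,
# rather than from whatever it memorised during training.
augmented_prompt = f"Context: {context[0]}\nQuestion: {question}"
print(augmented_prompt)
```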
7. Reasoning/planning
AI-powered computers now excel at solving problems and completing tasks by analysing patterns derived from historical data, a process akin to human reasoning. Advanced systems push boundaries by understanding complex issues and strategising and sequencing actions to achieve specific objectives.
Imagine utilising an AI program to plan a trip to a theme park. The system can take your objective—to experience six different rides, ensuring the water adventure occurs during peak heat—and systematically organise your itinerary. It efficiently avoids redundant routes and schedules your visit to the splash coaster between noon and 3 p.m., leveraging sophisticated reasoning capabilities.
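A toy sketch of that kind of planning, with made-up ride names and durations, using nothing more than a simple scheduling rule (place the water ride inside its time window and fit the other rides around it):

```python
# Made-up rides and approximate durations in minutes (queueing included).
rides = {
    "Splash Coaster": 45,   # the water ride, best enjoyed in peak heat
    "Sky Drop": 30,
    "Ghost Manor": 40,
    "Loop Runner": 35,
    "Carousel": 20,
    "River Rapids": 50,
}

def plan_day(rides: dict, start: int = 10 * 60, water_ride: str = "Splash Coaster",
             window: tuple = (12 * 60, 15 * 60)) -> list:
    """Greedy toy planner: keep the water ride inside its window, queue the rest in order.
    (A real planner would also check travel times and the end of the window.)"""
    itinerary, clock = [], start
    remaining = [r for r in rides if r != water_ride]
    water_done = False
    while remaining or not water_done:
        # Switch to the water ride once the clock enters its window (or nothing else is left).
        if not water_done and (clock >= window[0] or not remaining):
            ride = water_ride
            water_done = True
        else:
            ride = remaining.pop(0)
        itinerary.append((f"{clock // 60:02d}:{clock % 60:02d}", ride))
        clock += rides[ride]
    return itinerary

for time, ride in plan_day(rides):
    print(time, ride)
```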
8. SLMs and LLMs
Small Language Models (SLMs) are compact versions of their larger counterparts (LLMs), employing machine learning to generate natural language responses. While LLMs demand substantial computational resources for their vast datasets, SLMs like Phi-3 are trained on smaller, curated datasets and can operate efficiently, even offline. This makes them well suited to devices like laptops and phones, answering basic queries without extensive computational support.
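As an illustration, a small model such as Phi-3 mini can be run locally; the sketch below assumes the Hugging Face `transformers` library is installed and that the machine can download the publicly released `microsoft/Phi-3-mini-4k-instruct` checkpoint (the model name and loading details may change over time).

```python
from transformers import pipeline

# Download and load a small language model that can run on a laptop
# (the first call fetches several gigabytes of weights).
generator = pipeline("text-generation", model="microsoft/Phi-3-mini-4k-instruct")

# Answer a basic question entirely on the local machine, no cloud service required.
result = generator("What is the capital of the UAE?", max_new_tokens=30)
print(result[0]["generated_text"])
```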
9. Transformers and diffusion models
Transformer models have significantly advanced AI’s language understanding and generation capabilities by weighing every part of an input in context. Meanwhile, diffusion models, typically applied to image creation, refine their outputs through iterative adjustments, gradually turning random noise into an image that is faithful to the prompt.
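At the heart of a transformer is the attention calculation that lets each word weigh every other word in the input; a minimal NumPy sketch of scaled dot-product attention, using tiny random example matrices, looks like this:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: each token's output is a weighted mix of all values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # how strongly each token attends to each other token
    weights = softmax(scores)         # each row sums to 1
    return weights @ V                # blend the value vectors accordingly

# Toy example: 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)       # (4, 8)
```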
10. Training and inference
Creating and deploying an AI system involves two essential phases: training and inference. Training is the system’s educational phase, during which it learns from a dataset to perform tasks or make predictions. For instance, given a dataset containing recent home sale prices and variables like bedrooms, bathrooms, and other pertinent factors, the AI adjusts its internal parameters. These parameters dictate the importance or weight assigned to each variable in predicting home prices.
Once trained, the AI transitions to the inference phase. Here, it applies the learned patterns and optimised parameters to generate predictions. For example, when presented with details of a new home about to enter the market, the AI utilises its trained knowledge to forecast its selling price. This process leverages the system’s acquired intelligence to provide accurate and insightful predictions based on real-world data.
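A compact sketch of that two-phase process, using scikit-learn and a handful of made-up home sales (the numbers are purely illustrative):

```python
from sklearn.linear_model import LinearRegression

# Training data: [bedrooms, bathrooms, floor area in square metres] -> sale price.
X_train = [
    [2, 1, 80],
    [3, 2, 120],
    [4, 3, 200],
    [3, 2, 140],
    [5, 4, 260],
]
y_train = [300_000, 450_000, 700_000, 500_000, 900_000]

# Training phase: the model learns a weight for each variable.
model = LinearRegression()
model.fit(X_train, y_train)

# Inference phase: apply the learned weights to a home the model has not seen before.
new_home = [[4, 2, 180]]
predicted_price = model.predict(new_home)
print(f"Estimated price: {predicted_price[0]:,.0f}")
```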
As GenAI continues its transformative journey, understanding these foundational terms becomes increasingly vital. Whether navigating SLMs or harnessing RAG’s potential, staying informed ensures optimal utilisation of AI’s capabilities in diverse applications.
Featured image: Do you know your SLMs from your LLMs? Credit: Arnold Pinto