ChatGPT Glossary: 49 AI terms everyone should know

With Apple Intelligence now on the iPhone, artificial intelligence has hit its mainstream stride. ChatGPT, Google Gemini and Microsoft Copilot are pushing AI into all kinds of technology, changing the way we interact with it. Suddenly, people can have meaningful conversations with machines: you can ask an AI chatbot questions in natural language and it will respond with novel answers, much like a human would.

But AI-powered chatbots are only one part of the AI landscape. Sure, having ChatGPT help with your homework or Midjourney create striking images of mechs based on country of origin is great, but the potential of generative AI could completely reshape economies. It could be worth $4.4 trillion to the global economy annually, according to the McKinsey Global Institute, which is why you should expect to hear more and more about artificial intelligence.

AI appears in a dizzying array of products, a short list of which includes Google's Gemini, Microsoft Copilot, Anthropic's Claude, the Perplexity AI search tool, and gadgets from Humane and Rabbit. You can read our reviews and hands-on evaluations of those and other products, along with news, explainers and how-to posts, at our AI Atlas hub.

As people become more accustomed to a world intertwined with artificial intelligence, new terms are popping up everywhere. So whether you're trying to sound smart over drinks or impress in a job interview, here are some important AI terms you should know.

This glossary is updated regularly.


Artificial general intelligence, or AGI: A concept that proposes a more advanced version of artificial intelligence than we know today, one that can perform tasks much better than humans while also learning and developing its own abilities.

Agentive: Systems or models that can independently pursue actions to achieve a goal. In the context of AI, an agentive model can act without constant supervision, like a high-level self-driving car. Unlike an "agentic" framework, which works in the background, agentive frameworks are front and center, focusing on the user experience.

Artificial intelligence ethics: Principles aimed at preventing AI from harming humans, achieved through means such as specifying how AI systems collect data or deal with bias.

AI safety: An interdisciplinary field concerned with the long-term effects of artificial intelligence and how it could suddenly evolve into superintelligence that could be hostile to humans.

Algorithm: A series of instructions that allow a computer program to learn and analyze data in a certain way, such as recognizing patterns, and then learn from them and perform tasks on its own.

Alignment: Tweaking an AI to better produce the desired outcome. This can refer to anything from moderating content to maintaining positive interactions toward humans.

Anthropomorphism: The tendency for humans to give non-human objects humanlike characteristics. In AI, this can include believing a chatbot is more humanlike and aware than it actually is, like believing it's happy, sad or even sentient.

Artificial intelligence, or AI: The use of technology to simulate human intelligence, either in computer programs or robotics. A field in computer science that aims to build systems that can perform human tasks.

Autonomous agents: An AI model that has the capabilities, programming and other tools to accomplish a specific task. A self-driving car is one example of an autonomous agent, because it has sensory inputs, GPS and driving algorithms to navigate the road on its own. Researchers at Stanford University have shown that autonomous agents can develop their own cultures, traditions and shared language.

Bias: In regard to large language models, errors resulting from the training data. This can result in falsely attributing certain characteristics to certain races or groups based on stereotypes.

Chatbot: A program that communicates with humans through text that simulates human language.

ChatGPT: An AI chatbot developed by OpenAI that uses large language model technology.

Cognitive computing: Another term for artificial intelligence.

Data augmentation: Remixing existing data or adding a more diverse set of data to train the AI.

Deep learning: An artificial intelligence method, a subfield of machine learning, that uses multiple parameters to recognize complex patterns in images, audio, and text. This process is inspired by the human brain and uses artificial neural networks to create patterns.

Diffusion: A machine learning method that takes an existing piece of data, like a photo, and adds random noise. Diffusion models train their networks to re-engineer or recover that photo.
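To make the idea concrete, here's a minimal sketch of the "forward" half of that process, using NumPy with a random array standing in for a real image; the noise levels are arbitrary. A diffusion model would then be trained to reverse this corruption.

```python
import numpy as np

# Toy illustration of diffusion's forward process: corrupt an "image"
# with increasing amounts of Gaussian noise. A diffusion model learns
# to reverse this, recovering the original from the noise.
rng = np.random.default_rng(0)
image = rng.random((8, 8))  # stand-in for a real image

for sigma in (0.1, 0.5, 1.0):  # arbitrary noise levels
    noisy = image + rng.normal(0.0, sigma, image.shape)
    print(f"sigma={sigma}: pixel variance grew to {noisy.var():.2f}")
```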

Emergent behavior: When an AI model exhibits unintended capabilities.

End-to-end learning or E2E: A deep learning process in which a model is instructed to perform a task from start to finish. It is not trained to complete a task sequentially, but instead learns from the input and solves it all at once.

Ethical considerations: Awareness of the ethical implications of AI and issues related to privacy, data use, fairness, misuse, and other safety issues.

Foom: Also known as a fast takeoff or hard takeoff. The concept that if someone builds an artificial general intelligence, it may already be too late to save humanity.

Generative Adversarial Networks, or GANs: A generative AI model consisting of two neural networks for generating new data: the generator and the discriminator. The generator creates new content, and the discriminator checks if it is original.
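Here's a rough sketch of that two-network setup, assuming PyTorch is available; the layer sizes, data and training loop below are made up purely to show the generator-versus-discriminator structure, not to produce anything useful.

```python
import torch
import torch.nn as nn

# Toy GAN wiring: the generator turns random noise into fake samples,
# the discriminator scores samples as real (1) or fake (0).
generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

real_data = torch.randn(64, 2) + 3.0  # stand-in "real" distribution

for step in range(100):
    # Train the discriminator to tell real samples from generated ones.
    fake = generator(torch.randn(64, 16)).detach()
    d_loss = loss_fn(discriminator(real_data), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 16))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```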

Generative Artificial Intelligence: A content creation technology that uses artificial intelligence to create text, video, computer code, or images. AI is fed large amounts of training data, and finds patterns to generate its own new responses, which can sometimes be similar to the source material.

Google Gemini: Google's AI chatbot, which works similarly to ChatGPT but pulls information from the current web, while ChatGPT is limited to data through 2021 and isn't connected to the internet.

Guardrails: Policies and restrictions placed on AI models to ensure data is handled responsibly and that the model doesn't create disturbing content.

Hallucination: An incorrect response from AI. Generative AI can produce answers that are wrong but stated confidently as though correct. The reasons for this aren't entirely known. For example, when asked, "When did Leonardo da Vinci paint the Mona Lisa?" an AI chatbot may respond with the incorrect statement, "Leonardo da Vinci painted the Mona Lisa in 1815," which is 300 years after it was actually painted.

Inference: The process AI models use to generate text, images and other content about new data, by inferring from their training data.

Large Language Model or LLM: An AI model trained on large amounts of text data to understand language and create new content in human-like language.

Machine learning, or ML: A component of AI that allows computers to learn and make better predictive outcomes without explicit programming. It can be coupled with training sets to generate new content.

Microsoft Bing: A search engine from Microsoft that can now use the technology powering ChatGPT to give AI-powered search results. It's similar to Google Gemini in being connected to the internet.

Multimodal AI: A type of AI that can process multiple types of inputs, including text, images, videos and speech.

Natural language processing: A branch of artificial intelligence that uses machine learning and deep learning to give computers the ability to understand human language, often using learning algorithms, statistical models, and linguistic rules.

Neural network: A computational model that resembles the structure of the human brain and aims to recognize patterns in data. It consists of interconnected nodes, or neurons, that can recognize patterns and learn over time.
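As a rough illustration, here's what a tiny two-layer network's forward pass looks like in NumPy; the weights are random and the sizes arbitrary, but it shows the "weighted sum plus nonlinearity" pattern each layer of neurons applies.

```python
import numpy as np

# A minimal two-layer neural network forward pass: each layer computes
# a weighted sum of its inputs and passes it through a nonlinearity.
rng = np.random.default_rng(42)
x = rng.random(4)  # 4 input features

W1, b1 = rng.random((8, 4)), np.zeros(8)  # hidden layer: 8 neurons
W2, b2 = rng.random((3, 8)), np.zeros(3)  # output layer: 3 neurons

hidden = np.maximum(0, W1 @ x + b1)  # ReLU activation
output = W2 @ hidden + b2
print(output)  # 3 output values; training would adjust the weights to make these useful
```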

Overfitting: An error in machine learning where a model hews too closely to the training data and may only be able to identify specific examples in that data, but not new data.
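You can see overfitting in a few lines with scikit-learn: an unconstrained decision tree memorizes noisy training data, so its training accuracy looks perfect while its accuracy on held-out data lags. The dataset here is synthetic and only for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic, noisy classification data (flip_y adds label noise).
X, y = make_classification(n_samples=300, n_features=20, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree can memorize the training set.
model = DecisionTreeClassifier(max_depth=None).fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # typically near 1.0
print("test accuracy: ", model.score(X_test, y_test))    # noticeably lower: overfitting
```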

Paperclips: The Paperclip Maximizer theory, coined by philosopher Nick Bostrom of the University of Oxford, is a hypothetical scenario where an AI system creates as many literal paperclips as possible. In its goal to produce the maximum number of paperclips, the AI system would hypothetically consume or convert all materials to achieve its goal. This could include dismantling other machinery to produce more paperclips, machinery that could be beneficial to humans. The unintended consequence of this AI system is that it could destroy humanity in its goal to make paperclips.

Parameters: Numerical values that give LLMs structure and behavior, enabling them to make predictions.

Perplexity: The name of an AI-powered chatbot and search engine owned by Perplexity AI. It uses a large language model, like those found in other AI chatbots, to answer questions with novel answers. Its connection to the open internet also allows it to give up-to-date information and pull in results from across the web. Perplexity Pro, a paid tier of the service, is also available and uses other models, including GPT-4o, Claude 3 Opus, Mistral Large, the open-source LLaMA 3 and its own Sonar 32k. Pro users can additionally upload documents for analysis, generate images and interpret code.

Prompt: The suggestion or question you enter into an AI chatbot to get a response.

Prompt chaining: The ability of AI to use information from previous interactions to color future responses.
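A simple way to picture this is a chat loop that resends the earlier turns with each new prompt. The sketch below assumes the OpenAI Python client with an API key set in your environment, and the model name is just an example; the point is that the second question only makes sense because the first answer travels along with it.

```python
from openai import OpenAI

# Prompt chaining sketch: every new prompt is sent together with the
# earlier turns, so the model can use past context in its reply.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

for user_prompt in ["Name a famous painter.", "When was that painter born?"]:
    history.append({"role": "user", "content": user_prompt})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # keep the chain growing
    print(answer)
```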

Stochastic parrot: An analogy for LLMs illustrating that the software doesn't have a larger understanding of the meaning behind language or the world around it, regardless of how convincing its output sounds. The phrase refers to how a parrot can mimic human words without understanding the meaning behind them.

Style transfer: The ability to adapt the style of one image to the content of another, allowing AI to interpret the visual attributes of one image and use them on another. For example, taking Rembrandt’s self-portrait and recreating it in Picasso’s style.

Temperature: A parameter set to control how random a language model's output is. A higher temperature means the model takes more risks.
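Concretely, temperature rescales the model's scores before they're turned into probabilities for the next word. A minimal NumPy sketch, with made-up scores:

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution (safer picks);
    # higher temperature flattens it (riskier, more varied picks).
    scaled = np.asarray(logits) / temperature
    exp = np.exp(scaled - scaled.max())
    return exp / exp.sum()

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate next words
print(softmax_with_temperature(logits, 0.5))  # peaked: top word dominates
print(softmax_with_temperature(logits, 1.5))  # flatter: other words get a chance
```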

Text-to-image generation: Creating images based on text descriptions.

Tokens: Small bits of written text that AI language models process to formulate their responses to your prompts. A token is equivalent to about four characters in English, or about three-quarters of a word.
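That rule of thumb is easy to apply yourself. This snippet just turns the "about four characters, or three-quarters of a word" estimate into arithmetic; real tokenizers will give slightly different counts.

```python
# Back-of-the-envelope token estimate from the rule of thumb above.
prompt = "Explain how large language models are trained."
approx_tokens_by_chars = len(prompt) / 4             # ~4 characters per token
approx_tokens_by_words = len(prompt.split()) / 0.75  # ~0.75 words per token
print(round(approx_tokens_by_chars), round(approx_tokens_by_words))
```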

Training data: Data sets used to help AI models learn, including text, images, code, or data.

Transformer model: A neural network architecture and deep learning model that learns context by tracking relationships in data, like words in a sentence or parts of an image. So, instead of analyzing a sentence one word at a time, it can look at the whole sentence and understand the context.
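The mechanism that lets a transformer look at the whole sentence at once is attention. Here's a bare-bones NumPy sketch of scaled dot-product attention with random vectors standing in for token embeddings; real models add learned projections, multiple heads and many stacked layers.

```python
import numpy as np

def attention(Q, K, V):
    # Each token's output is a weighted mix of every token's values,
    # with weights based on how strongly the tokens relate to each other.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = K = V = rng.random((5, 8))   # 5 tokens, 8-dimensional embeddings (made up)
print(attention(Q, K, V).shape)  # (5, 8): one context-aware vector per token
```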

Turing test: Named after the famous mathematician and computer scientist Alan Turing, it tests a machine’s ability to behave like a human. A machine passes if a human cannot distinguish the machine’s response from another human.

Unsupervised learning: A form of machine learning where labeled training data is not provided to the model and instead the model must identify patterns in the data itself.
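A classic small example is clustering: give an algorithm unlabeled points and let it find the groups on its own. This sketch uses scikit-learn's k-means on two made-up blobs of data.

```python
import numpy as np
from sklearn.cluster import KMeans

# No labels are provided, yet k-means discovers the two groups itself.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (50, 2)),   # blob around (0, 0)
                  rng.normal(5, 1, (50, 2))])  # blob around (5, 5)

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(clusters[:5], clusters[-5:])  # the two blobs land in different clusters
```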

Weak AI, also known as Narrow AI: AI that is focused on a specific task and cannot learn beyond its own set of skills. Most AI today is weak AI.

Zero-shot learning: A test in which a model must complete a task without being given the requisite training data. An example would be recognizing a lion while having been trained only on tigers.
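One common way to pull this off is to compare things in a shared embedding space: the model has never seen a lion, but it can match an image's features against written class names. The vectors below are invented purely for illustration; real systems would get them from an image-text model.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings: the classifier was never trained on lions.
class_embeddings = {
    "tiger": np.array([0.9, 0.1, 0.4]),
    "lion":  np.array([0.8, 0.2, 0.1]),
}
unseen_image = np.array([0.85, 0.25, 0.05])  # made-up features of a lion photo

best = max(class_embeddings, key=lambda name: cosine(class_embeddings[name], unseen_image))
print(best)  # "lion", even though no lion images were in the training data
```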




