Artificial Intelligence is here. AI is already being used in development, testing, tool development, and products. Its use will undoubtedly grow and become more pervasive.
At LogiGear Magazine we regularly include a glossary of terms on the issue topic. In this issue, the glossary is unique.
If you are just beginning to learn about AI, there are a whole lot of unique phrases to grasp before you can understand the terms and possibilities of AI. The goal here is to maximize your understanding of AI and Machine Learning so that you can do a few things:
- See where AI could apply to your testing practice. For example, using AI to predict where bugs may lie in wait. Knowing this, you could gather data (bug-tracking information, or bugs traced to the chunks of code where fixes were made) to help build an AI bug-prediction system.
- Understand AI well enough that when you begin using an AI-based Automation tool, or an AI-based Test Automation helper tool, you can think of more ways to apply the tool or increase its use. The more you understand about AI, the more value you can get from such tools.
- If you are testing an AI or Machine Learning app, you will need a solid understanding of how AI works. Testing AI systems is far different from testing virtually any system you may be working on already.
- For the testing of AI or Machine Learning, remember that part of Machine Learning is that, with use, the machine learns more or predicts more. Given a certain input or data fed into an algorithm today, the machine will learn and may give you a different answer tomorrow. Most of us do deterministic testing: A+B=C. We expect C to be the answer based on some information, such as a requirement/user story or subject matter expertise. With Machine Learning, today A+B may equal C, but tomorrow it may equal D. To understand this, see the term non-deterministic in the following glossary.
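The contrast between deterministic and Machine Learning behavior changes how you write test oracles. Here is a minimal sketch in Python; the function names are hypothetical, and the randomness merely simulates a model whose output drifts as it keeps learning:

```python
import random

def add(a, b):
    # Deterministic: the same inputs always give the same output.
    return a + b

def ml_predict(features):
    # Hypothetical stand-in for a model whose answer can change over
    # time; random noise simulates that non-determinism.
    return 0.7 + random.uniform(-0.1, 0.1)

# Deterministic testing: assert an exact expected value (A+B=C).
assert add(2, 3) == 5

# Non-deterministic testing: assert a property or tolerance instead
# of an exact value, because tomorrow the answer may differ.
score = ml_predict([1.0, 2.0])
assert 0.0 <= score <= 1.0  # range check, not an exact match
```

The key design shift: instead of asserting one expected output, you assert properties the output must satisfy on every run.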
Artificial Intelligence Terms
Artificial Intelligence
Intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.
Artificial Neural Networks
Computing systems, also known as connectionist systems, vaguely inspired by the biological neural networks that constitute animal brains. The neural network itself is not an algorithm, but rather a framework for many different machine learning algorithms to work together and process complex data inputs.
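To make the idea concrete, here is a minimal sketch of a feed-forward network in plain Python: inputs flow through a hidden layer of "neurons" (weighted sums passed through a nonlinearity) to a single output. The weights shown are illustrative, not trained:

```python
import math

def forward(x, w_hidden, w_out):
    # Each hidden neuron computes a weighted sum of the inputs and
    # applies a nonlinearity (tanh); the output combines the neurons.
    hidden = [
        math.tanh(sum(wi * xi for wi, xi in zip(row, x)))
        for row in w_hidden
    ]
    return sum(wo * h for wo, h in zip(w_out, hidden))

# Two inputs, two hidden neurons, one output (illustrative weights).
y = forward([1.0, 0.5], [[0.4, -0.2], [0.1, 0.3]], [0.7, -0.5])
```

In real frameworks the same structure is expressed as layers whose weights are adjusted by a learning algorithm rather than set by hand.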
Robots
Machines that can substitute for humans and replicate human actions. Robots can be used in many situations and for lots of purposes, but today many are used in dangerous environments (including bomb detection and deactivation), manufacturing processes, or where humans cannot survive (e.g. in space). Robots can take on any form, but some are made to resemble humans in appearance.
Data Mining
The process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems.
Data Science
An interdisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from data in various forms, both structured and unstructured, similar to data mining.
Deep Learning
Part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised, or unsupervised.
Deterministic Algorithm
An algorithm that, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states.
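A small illustrative example of determinism, using a hypothetical checksum function: no matter how many times it runs, the same input yields the same output:

```python
def checksum(data):
    # A deterministic algorithm: same input, same output, every run,
    # with the machine passing through the same sequence of states.
    total = 0
    for byte in data.encode("utf-8"):
        total = (total * 31 + byte) % 65521
    return total

# Repeated runs with the same input always agree.
assert checksum("hello") == checksum("hello")
```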
Explicit Requirements
Most commonly found in documents communicated by stakeholders to the development team. They might take the form of an elaborate design specification, a set of acceptance criteria, or a set of wireframes.
Implicit Requirements
All the things that users are going to expect that were not captured explicitly. Examples include performance, usability, availability, and security.
Latent Requirements
Behaviors that users do not expect based on their previous experiences, but which will make them like the software more.
Non-deterministic Algorithm
An algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm.
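A classic illustration is a randomized (Monte Carlo) algorithm: given the identical input, two runs usually return slightly different outputs. Here is a minimal sketch estimating π by random sampling:

```python
import random

def estimate_pi(samples=10_000):
    # A non-deterministic algorithm: the same input can yield a
    # different output on different runs, because each run draws
    # different random points.
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

# Two runs with identical input typically differ slightly,
# yet both hover near pi.
a, b = estimate_pi(), estimate_pi()
```

Testing such algorithms means asserting on ranges or statistical properties, not exact values, which is exactly the mindset shift ML testing requires.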
Machine Learning
The study of algorithms and mathematical models that computer systems use to progressively improve their performance on a specific task. Most machine learning systems are based on neural networks, or sets of layered algorithms whose variables can be adjusted via a learning process.
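"Variables adjusted via a learning process" can be sketched in a few lines: gradient descent nudging a single weight until the model's predictions match the data. This toy example (made-up data following y = 2x) is not any particular library's API, just the core idea:

```python
# Toy learning process: fit weight w so that w * x approximates y.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying rule: y = 2x
w = 0.0     # the adjustable variable, starting from a wrong guess
lr = 0.05   # learning rate: how big each adjustment is

for _ in range(200):
    for x, y in data:
        error = w * x - y
        w -= lr * error * x  # adjust w to reduce the error

# After training, w has "learned" the rule from the data.
assert abs(w - 2.0) < 0.01
```

Real systems adjust millions of such variables at once, but each follows the same loop: predict, measure error, adjust.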
Predictive Analytics
Encompasses a variety of statistical techniques from data mining, predictive modelling, and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events.
Blockchain
A growing list of records, called blocks, which are linked using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data.
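The hash-linking idea fits in a short sketch using Python's standard hashlib; the block layout here is illustrative, not any real blockchain's format:

```python
import hashlib
import json
import time

def make_block(prev_hash, transactions):
    # Each block records a timestamp, its transaction data, and the
    # hash of the previous block, cryptographically linking the chain.
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block("0" * 64, ["genesis"])
block1 = make_block(genesis["hash"], ["alice -> bob: 5"])

# Each block points at the previous block's hash, so altering an
# earlier block would invalidate every block after it.
assert block1["prev_hash"] == genesis["hash"]
```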
DApp (Decentralized Application)
Applications that run on a P2P network of computers rather than a single computer.
Ethereum
An open-source, public, blockchain-based distributed computing platform and operating system featuring smart contract functionality.
Ethereum Improvement Proposals
Describe standards for the Ethereum platform, including core protocol specifications, client APIs, and contract standards.
Smart Contract
A computer protocol intended to digitally facilitate, verify, or enforce the negotiation or performance of a contract. Smart contracts allow the performance of credible transactions without third parties.
Continuous Integration
The process of automating the build and testing of code every time a team member commits changes to version control.
Continuous Testing
The process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release candidate as rapidly as possible.
This is a lot. We hope it will help broaden your knowledge of AI and the associated issues so you will be on board and ready to take full advantage in your inevitable work using AI as part of your development and testing practice or, if you are lucky, developing and testing AI products.