An AI literate person can be expected to evaluate AI models in terms of accuracy, bias and output limitations. AI literacy also requires the ability to identify risks and ethical implications. The base level requires knowledge of how various AI products work. An AI literate person will be able to use different AI tools effectively and communicate and work with AI systems.
Whether you’re planning to implement an AI system, develop resources for your organization or are just interested in a basic level set of fundamental AI terms and concepts, it helps to have a basic understanding. For starters, AI systems can be grouped into four categories: reactive AI, Limited Memory AI, Theory of Mind AI and Self-aware AI.
Another way to group AI systems is by how they work. All require extensive volumes of data and can work in various formats such as text, images, audio recordings, and sensor readings. AI is built on sets of instructions, or algorithms, which analyze data and identify relationships and patterns.
- Machine Learning AI has been well established and is often used in customer service functions. It’s adaptable and allows a system to improve performance over time by “learning” without explicit programming.
- Large Language Models (LLMs) are a type of AI program capable of understanding and generating human-like text. They’re “trained” on large scale volumes of text data and learn how to predict the next word in a sequence. LLMs are normally trained using self-supervised learning without explicit human labeling. They can perform language translation, text summarization, and chatbot interactions.
- Generative AI describes a wide spectrum of AI models that can create new content, not just text. It can be used for tasks like designing new products, creating articles, blog posts, social media content, product descriptions, and marketing copy.
PUBLIC OR PRIVATE AI?
The most important decision in implementing AI is whether to use a public or private system. There’s been a rapid rise in deployment of private AI based on the need to protect sensitive data. Businesses can customize private AI models to their specific needs, train them on proprietary data and ensure the AI operates within the
company’s guidelines.
Public systems such as Chat GPT and CoPilot send your data to a public endpoint on the Internet to be processed by a LLM. This is how confidential data is sometimes unintentionally made available publicly.
Public platforms allow businesses to rapidly adopt AI without requiring major IT overhauls or specialized infrastructure. And public AI scales easily because it’s built on cloud infrastructure. Public AI is usually subscription based. It offers a lower initial investment, but over the long term, usage costs can build up.
AI systems can be grouped into four categories:
Reactive AI: A system programmed to provide a predictable output based on inputs. It isn’t able to “learn” actions. These systems have been available for some time.
Limited Memory AI: This widespread type combines historical and observational data with preprogrammed data to make predictions and manage complex classification tasks. Autonomous vehicles are an example.
Theory of Mind AI: This foreseeable but not yet available version would enable a robot to look and sound like a human being. It would feel real emotions and adjust its behavior based on interactions.
Self-Aware AI: This most advanced type has yet to be developed. The system will have a concept of self-awareness along with desires, needs, and emotions.
FINDING THE BALANCE: A HYBRID STRATEGY
A hybrid strategy offers the best balance for most businesses. It organizes desired tasks into data-sensitive and non-sensitive operations. Then it uses public AI for non-sensitive purposes and data and private AI for working with confidential data.
Examples of public use AI are customer service chatbots or marketing content generation that uses publicly available material. Private AI systems might be used for financial analysis involving confidential data, employee payroll and benefits, or loyalty program member analysis involving guests’ personal data.
A.I. SECURITY RISKS
There’s so much excitement and such high expectations around AI outcomes that it’s easy to overlook the dangers of using it. Potential high-level risks include: Data breaches. Collections of training data can contain sensitive information about an organization’s customers and its business. Storing and using this information to
train AI runs the risk that it will be breached by an attacker.
Adversarial attacks. These target AI models by manipulating input data (also called prompt injection) to trick the system into making incorrect decisions or providing harmful outputs. LLMs can be tricked into participating in cybercrimes, misleading autonomous vehicle systems, or bypassing facial recognition security measures.
Data manipulation and poisoning. These attacks target the labeled data used to train AI models. They introduce additional, mislabeled instances into a data collection. The goal is to train the AI’s model incorrectly. If the training dataset contains attack traffic labeled as benign, the AI model won’t recognize those attacks. The attacker can slip past the AI system once it’s deployed.
Lack of transparency.The models AI systems use aren’t transparent or interpretable. This makes it hard to determine whether the AI’s model contains biases or errors, such as those introduced by a corrupted training dataset.
Automated malware generation. GenAI tools have protections against writing malware; however, these safeguards often have loopholes. GenAI can allow less sophisticated threat actors to develop advanced malware, and its capabilities will only grow in the future.
Model supply chain attacks. Training an AI model is a complex challenge. That means many organizations will outsource the task to third parties. Attackers can target the organizations developing the model, injecting malicious training data or taking other steps to corrupt it.
HOW TO PROTECT YOURSELF FROM THE AI RISKS
The first step is to update overall governance. Incorporate human oversight to ensure AI decisions are fair, ethical and compliant with regulations. Add AI guidance to your acceptable use of technology policy. Create IT procedures for both development and operations. Review and update third-party management policies to address AI. Implement regular security audits and ethical AI practices.
There are two essential ways to control technical risks. First, create AI guardrails. These protocols and tools make sure AI systems operate as intended. Guardrails prevent misuse, monitor innovations, and safeguard data privacy.
Second (but still a high priority): Sanitize data. Remove or anonymize sensitive information from training datasets.
Additional well-known and effective technical security risk controls applicable to AI include:
- Strong encryption for data at rest and in transit
- Data validation -- identifying and filtering malicious or corrupted data
- Robust access controls and authentication
Include these risk controls during the development of a private AI system. First, define and maintain data boundaries to prevent misaligned exposure. Then use differential privacy techniques during model development to make it harder for attackers to extract individual information. Train a new model on both normal and
adversarial examples to make them more resilient to attacks. Along with training, conduct rigorous adversarial testing to evaluate the system’s resilience against potential attacks.