What is Artificial Intelligence (AI)?

Overview

Artificial intelligence (AI) is the ability of a machine or computer to imitate the capabilities of the human mind. AI draws on multiple technologies to equip machines to plan, act, comprehend, learn, and sense with human-like intelligence. AI systems may perceive environments, recognize objects, make decisions, solve problems, learn from experience, and imitate examples. Combined, these abilities let machines perform tasks that would otherwise require human effort, such as driving a car or greeting a guest.

Why AI is growing in popularity

Artificial intelligence may have entered everyday conversation only over the last decade or so, but it has been around much longer. The relatively recent rise in its prominence is no accident.

AI technology, and especially machine learning, relies on the availability of vast volumes of information. The proliferation of the Internet, the expansion of cloud computing, the rise of smartphones, and the growth of the Internet of Things have created enormous quantities of data that grow every day. This treasure trove of information, combined with huge gains in computing power, has made it possible to process enormous datasets rapidly and accurately.

Today, AI is completing our chat conversations, suggesting email responses, providing driving directions, recommending the next movie we should stream, vacuuming our floors, and performing complex medical image analyses.

History of AI

The history of artificial intelligence goes as far back as ancient Greece. However, it’s the rise of electronic computing that made AI a real possibility. Note that what is considered AI has changed as the technology has evolved. For example, a few decades ago, machines that could perform optical character recognition (OCR) or simple arithmetic were categorized as AI. Today, OCR and basic calculations are not considered AI but rather elementary functions of a computer system.

  • 1950s – Alan Turing, famous for breaking the Nazis’ WWII ENIGMA code, publishes the paper Computing Machinery and Intelligence in the journal Mind, attempting to answer the question of whether machines can think. He outlines the Turing Test, which holds that a computer shows human-like intelligence if it can hold a conversation with a human without the human realizing they are speaking to a machine. The first-ever AI conference is held at Dartmouth College, where the term artificial intelligence is first used.
  • 1960s – The US Department of Defense through DARPA takes great interest in AI and embarks on developing computer programs that mimic human reasoning. Frank Rosenblatt builds the Mark 1 Perceptron computer based on a neural network that learns through experience.
  • 1970s – DARPA completes various street mapping projects.
  • 1980s – A more complex wave of AI emerges. Neural networks with backpropagation algorithms find widespread application in AI systems.
  • 1990s – Exponentially growing volumes of data are produced, and increasingly powerful computers process large quantities of it quickly. IBM’s Deep Blue supercomputer defeats world chess champion Garry Kasparov. The Human Genome Project and other similarly complex undertakings generate vast amounts of information, and computing advances make it possible for this data to be stored, accessed, and analyzed.
  • 2000s – The Internet revolution drives AI to unprecedented heights. Big data enters the corporate lexicon. DARPA funds intelligent personal assistants long before Alexa, Siri, Cortana, and Google Assistant become household names, paving the way for the reasoning and automation built into present-day personal computers and smartphones, including smart search systems and decision support systems that augment and complement human abilities.
  • 2010s – China’s search giant Baidu unveils the Minwa supercomputer, which relies on a convolutional neural network to identify, analyze, and categorize images with higher accuracy than the average human. The AlphaGo deep neural network program from DeepMind beats Go world champion Lee Sedol in a five-game match. Go is an ancient Chinese game considerably more complex than chess.

How does AI work?

Artificial intelligence rests on the premise that the actions of intelligent systems follow principles a machine can reproduce. It works by reverse-engineering human capabilities and traits onto a machine, then using computational power to exceed what the average human is capable of. Rather than being explicitly programmed for every response, the machine relies on historical data and algorithms to build a propensity model: it learns from experience to perform cognitive tasks that are ordinarily the preserve of the human brain, automatically picking up features or patterns in the data.
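
As a concrete illustration, here is a minimal sketch of that idea in Python using scikit-learn: a model is fitted to historical, labeled examples and then predicts an outcome for an input it has never seen. The loan-approval framing and all numbers are invented purely for illustration.

    # Learn a pattern from historical data, then predict for new input.
    from sklearn.tree import DecisionTreeClassifier

    # Invented history: [income in $1000s, years employed] per applicant.
    X_history = [[30, 1], [85, 6], [45, 2], [120, 10], [25, 0], [95, 8]]
    y_history = [0, 1, 0, 1, 0, 1]  # 1 = loan approved, 0 = declined

    model = DecisionTreeClassifier(max_depth=2)
    model.fit(X_history, y_history)   # "learning from experience"

    print(model.predict([[70, 5]]))   # prediction for an unseen applicant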

AI is founded on two pillars – engineering and cognitive science. The engineering pillar involves building tools that exhibit human-comparable intelligence, combining large volumes of data with sets of instructions (algorithms) and rapid, iterative processing. The cognitive science pillar involves emulating how the human brain works, and it brings to AI multiple fields, including machine learning, deep learning, neural networks, cognitive computing, computer vision, natural language processing, and knowledge reasoning.

AI systems are not monolithic

Artificial intelligence isn’t one type of system; it’s a diverse domain. At one end are simple, low-level AI systems focused on performing a specific task, such as weather apps, business data analysis apps, taxi-hailing apps, and digital assistants. This type, called "narrow AI," is what the average person is most likely to interact with, and its main purpose is driving efficiency.

On the other end of the spectrum are advanced systems that emulate human intelligence at a more general level and can tackle complex tasks. These include thinking creatively, abstractly, and strategically. Strictly speaking, this kind of truly sentient machine, called "Artificial General Intelligence" or AGI, only exists on the silver screen for now, though the race toward its realization is accelerating.

Where is Artificial Intelligence used?

Humans have pursued artificial intelligence in recognition of how invaluable it can be for business innovation and digital transformation. AI can cut costs and introduce levels of speed, scalability, and consistency that are otherwise out of reach. You probably interact with some form of AI multiple times each day. The applications of AI are too numerous to cover exhaustively here, so here’s a high-level look at some of the most significant ones.

1. Cybersecurity

As cyberattacks grow in scale, sophistication, and frequency, human-dependent cyber defenses are no longer adequate. Traditionally, anti-malware applications were built with specific threats in mind: virus signatures were updated as new malware was identified.

But keeping up with the sheer number and diversity of threats eventually becomes a near-impossible task. This approach was also reactive, depending on a specific piece of malware being identified before it could be added to the next update.

AI-based anti-spam, firewall, intrusion detection/prevention, and other cybersecurity systems go beyond this archaic rule-based strategy. Real-time threat identification, analysis, mitigation, and prevention is the name of the game: these systems detect malware traits and take remedial action even before a threat has been formally identified.

AI cybersecurity systems rely on a continuous feed of data to recognize patterns and backtrack attacks. By feeding algorithms large volumes of information, these systems learn how to detect anomalies, monitor behavior, respond to threats, adapt to attacks, and issue alerts.
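
As a hedged sketch of the anomaly-detection idea, the snippet below fits scikit-learn’s IsolationForest to examples of "normal" traffic and flags an outlier. The features (bytes transferred, failed logins) are an invented toy schema, not what any particular product uses.

    # Learn what "normal" looks like, then flag statistical outliers.
    from sklearn.ensemble import IsolationForest

    # Invented features per session: [bytes transferred, failed logins].
    normal_traffic = [[500, 1], [480, 0], [510, 1], [495, 2], [505, 1]]

    detector = IsolationForest(contamination=0.1, random_state=0)
    detector.fit(normal_traffic)

    suspect = [[50000, 40]]            # huge transfer, many failed logins
    print(detector.predict(suspect))   # -1 = anomaly, 1 = normal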

2. Speech recognition and natural language processing

Also referred to as speech-to-text (STT), speech recognition is technology that converts spoken language into digital text. It’s at the heart of computer dictation apps, as well as voice-enabled GPS and voice-driven call-answering menus.

Natural language processing (NLP) enables software to decipher, interpret, and generate human language. NLP is the technology behind Alexa, Siri, chatbots, and other forms of text-based assistants. Some NLP systems use sentiment analysis to make out the attitude, mood, and subjective qualities in text.
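
To make sentiment analysis concrete, here is a deliberately tiny, lexicon-based scorer. Production NLP systems use trained language models rather than word lists; the vocabulary below is invented purely for illustration.

    # Toy sentiment analysis: count positive vs. negative words.
    POSITIVE = {"great", "love", "excellent", "helpful", "fast"}
    NEGATIVE = {"slow", "broken", "terrible", "hate", "confusing"}

    def sentiment(text):
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    print(sentiment("The assistant was fast and helpful"))  # positive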

3. Image recognition

Also known as machine vision or computer vision, image recognition is artificial intelligence that allows a machine to classify and identify people, objects, text, actions, and writing within still or moving images. Usually powered by deep neural networks, image recognition has found application in self-driving cars, medical image/video analysis, fingerprint identification systems, check deposit apps, and more.
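
The sketch below defines a small convolutional neural network of the kind mentioned above, using Keras (a framework choice of ours; the article names none). The layer sizes are illustrative and assume 28x28 grayscale images in 10 classes; real training data would still be needed.

    # A minimal convolutional network for image classification.
    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        layers.Input(shape=(28, 28, 1)),           # 28x28 grayscale input
        layers.Conv2D(16, 3, activation="relu"),   # learn local visual features
        layers.MaxPooling2D(),                     # downsample feature maps
        layers.Conv2D(32, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),    # one probability per class
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.summary()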

4. Real-time recommendations

E-commerce and entertainment websites/apps leverage neural networks to recommend products and media that will appeal to the customer based on their past activity, the activity of similar customers, the season, the weather, the time of day, and more. These real-time recommendations are customized to each user. For e-commerce sites, recommendations not only grow sales but also help optimize inventory, logistics, and store layout.
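
One simple way to implement "based on the activity of similar customers" is user-based collaborative filtering. The NumPy sketch below finds the most similar user by cosine similarity and recommends that user’s highest-rated item that the target user hasn’t tried yet; the ratings matrix is invented.

    # Toy collaborative filtering: recommend what a similar user liked.
    import numpy as np

    # Rows = users, columns = items; 0 means "not rated yet".
    ratings = np.array([
        [5, 4, 0, 1],
        [4, 5, 4, 2],
        [1, 0, 5, 4],
    ])

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    target = 0
    others = [u for u in range(len(ratings)) if u != target]
    nearest = max(others, key=lambda u: cosine(ratings[target], ratings[u]))
    unseen = np.where(ratings[target] == 0)[0]
    print(unseen[np.argmax(ratings[nearest][unseen])])  # item 2 here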

5. Automated stock trading

The stock market can be extremely volatile in times of crisis. Billions of dollars in market value may be wiped out in seconds, and an investor in a highly profitable position one minute could find themselves deep in the red shortly thereafter. Yet it’s near impossible for a human to react quickly enough to market-influencing events. High-frequency trading (HFT) systems are AI-driven platforms that make thousands or millions of automated trades per day to keep stock portfolios optimized for large institutions.
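
As a drastically simplified illustration (nothing like real HFT infrastructure, which reacts in microseconds), automated trading starts with rules that turn market data into buy/sell signals. The moving-average crossover below is one classic such rule; the prices are invented.

    # Toy trading rule: buy when the short-term average price crosses
    # above the long-term average, otherwise sell.
    prices = [100, 101, 103, 102, 105, 107, 106, 109, 111, 110]

    def moving_avg(series, window):
        return sum(series[-window:]) / window

    signal = "BUY" if moving_avg(prices, 3) > moving_avg(prices, 7) else "SELL"
    print(signal)  # BUY for this invented series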

6. Ride-sharing services and self-driving cars

Lyft, Uber, and other ride-share apps use AI to connect requesting riders to available drivers. AI technology minimizes detours and wait times, provides realistic ETAs, and deploys surge-pricing during spikes in demand.
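
At its core, matching a rider to a driver is an optimization problem. The toy sketch below greedily picks the driver with the smallest straight-line distance; real dispatch systems weigh traffic, detours, predicted demand, and more. All coordinates and IDs are invented.

    # Toy dispatch: assign the nearest available driver to a rider.
    import math

    rider = (40.7580, -73.9855)  # (latitude, longitude)
    drivers = {
        "driver_1": (40.7614, -73.9776),
        "driver_2": (40.7489, -73.9680),
        "driver_3": (40.7306, -73.9866),
    }

    def distance(a, b):
        # Straight-line distance; real systems use road networks.
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best = min(drivers, key=lambda d: distance(rider, drivers[d]))
    print(best)  # driver_1 is closest here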

Self-driving cars are not yet standard in most of the world, but there has already been a concerted push to embed AI-based safety functions that detect dangerous scenarios and help prevent accidents.

7. Autopilot technology

Unlike in land-based vehicles, the margin for error in aircraft is extremely narrow: at altitude, a small miscalculation may lead to hundreds of fatalities. Aircraft manufacturers therefore had to push safety systems hard, becoming some of the earliest adopters of artificial intelligence.

To minimize the likelihood and impact of human error, autopilot systems have been flying military and commercial aircraft for decades. They use a combination of GPS technology, sensors, robotics, image recognition, and collision avoidance to navigate planes safely through the sky while keeping pilots and ground crew updated as needed.
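
One small ingredient of the collision avoidance mentioned above can be shown as a worked calculation: the closest point of approach (CPA) between two aircraft flying straight-line tracks. The positions and velocities below are invented, and real systems such as TCAS are far more involved.

    # Closest point of approach between two straight-line tracks.
    import numpy as np

    p1, v1 = np.array([0.0, 0.0]), np.array([250.0, 0.0])           # m, m/s
    p2, v2 = np.array([20000.0, 3000.0]), np.array([-240.0, -30.0])

    dp, dv = p2 - p1, v2 - v1
    t_cpa = max(0.0, -(dp @ dv) / (dv @ dv))     # time of closest approach
    min_sep = np.linalg.norm(dp + dv * t_cpa)    # separation at that time
    print(f"closest approach in {t_cpa:.0f} s at {min_sep:.0f} m")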

8. Software test automation

Artificial intelligence accelerates and simplifies test creation, execution, and maintenance through AI-powered intelligent test automation. AI-based machine learning and advanced optical character recognition (OCR) provide advanced object recognition; combined with AI-based mockup identification, recording, text matching, and image-based automation, they let teams reduce test creation time and maintenance effort while boosting test coverage and the resilience of testing assets.

9. Functional testing

Artificial intelligence allows you to test earlier and faster with functional testing solutions that combine extensive technology support with AI-driven capabilities, delivering the speed and resilience needed to support rapid application changes within a continuous delivery pipeline.

10. Enterprise service management

Both IT and business face the challenges of too many manual, error-prone workflows, an ever-increasing volume of requests, employees dissatisfied with the level and quality of service, and more. Artificial Intelligence and machine learning technology can take service management to the next level:

  • Smart search capabilities enable employees to find answers easily and quickly
  • Virtual agents or bots can perform tasks using natural language processing (NLP)
  • Intelligent analytics enable workflow optimization and automation
  • Metrics from unstructured data, such as user surveys, can be gathered and analyzed more efficiently

Read How AI is enabling Enterprise Service Management from the resource list below for more thoughts and information on the role of artificial intelligence (AI) in the adoption and expansion of enterprise service management (ESM).

What is true of IT support is also true of ESM: AI makes operations and outcomes better. To find out more, read Ten tips for empowering your IT support with AI.

11. Robotic process automation (RPA)

Robotic process automation (RPA) uses software robots that mimic screen-based human actions to perform repetitive tasks and extend automation to applications with difficult or missing application programming interfaces (APIs). That makes RPA well suited to automating processes typically completed by humans or requiring human intervention. Resilient robots adapt to screen changes and keep processes flowing when change happens. When powered by AI-based machine learning, RPA robots identify screen objects – even ones they haven’t seen before – and emulate human intuition to determine their functions. They use OCR to read text (for example, text boxes and links) and computer vision to read visual elements (for example, shopping cart icons and login buttons). When a screen object changes, robots adapt, and machine learning drives them to continuously improve how they see and interact with screen objects – just as a human would.
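
A hedged sketch of one piece of this: locating a screen object by appearance and reading on-screen text with OCR. It uses OpenCV template matching and the pytesseract wrapper for Tesseract; the file names are placeholders, and commercial RPA tools use far more robust, ML-driven object recognition.

    # Find a button on a screenshot by visual similarity, then read text.
    import cv2
    import pytesseract

    screen = cv2.imread("screenshot.png")      # placeholder screen capture
    button = cv2.imread("login_button.png")    # placeholder reference image

    scores = cv2.matchTemplate(screen, button, cv2.TM_CCOEFF_NORMED)
    _, confidence, _, top_left = cv2.minMaxLoc(scores)

    if confidence > 0.8:                       # strong visual match
        print(f"login button found at {top_left}")
        print(pytesseract.image_to_string(screen))  # OCR the screen text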

Get started with Artificial Intelligence. Get ahead

There are plenty of ways you could leverage artificial intelligence to keep your business competitive, drive growth, and unlock value. Nevertheless, your organization doesn’t have infinite resources, so you must prioritize. Begin by defining your organization’s values and strategic objectives. From there, assess the possible applications of AI against those values and objectives, and choose the AI technology most likely to deliver the biggest impact for the business.

The world is only going to grow more AI-dependent. It’s no longer about whether to adopt AI but when. Organizations that tap into AI ahead of their peers could gain a significant competitive advantage. Developing and pursuing a well-defined AI strategy is where it all begins. It may take a bit of experimenting before you know what will work for you.
