Initial Idea 1: Artificial Intelligence

Introduction:

Artificial Intelligence (AI) refers to computers, or computer-controlled robots, that can perform tasks usually carried out by humans because they require intelligence and discernment. AI imparts capabilities of the human brain to a computer-based application or piece of software. With the help of cognitive and logical thinking, AI can seamlessly perform different operations, developing knowledge and reasoning similar to the human brain. AI has now progressed to the point where AI-based mobile and web applications can perform human-level tasks in real time.

What is Artificial Intelligence?

Artificial Intelligence (AI) is the science and engineering of building intelligent machines and computer programs. The idea of artificial intelligence is modeled on human intelligence itself, but unlike natural intelligence, AI has no biologically observable limitations: it can train on recorded information, reveal patterns, learn from examples, and anticipate future results. Humans would not be able to investigate at a comparable speed and scale; AI's predictions and conclusions are inferences drawn from datasets far larger than people could examine (PK, F.A., 1984).

How long has it been around?

The field of artificial intelligence was founded in 1956, when John McCarthy first defined the term; he also created LISP (short for List Processing), a programming language still in use today. The first international conference on artificial intelligence was held in Washington in 1970.
From 1957 to 1974, AI flourished. Computers could store more information and become faster, cheaper, and more accessible. Machine learning algorithms also improved and people got better at knowing which algorithm to apply to their problems. Early demonstrations such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise toward the goals of problem-solving and the interpretation of spoken language respectively. These successes, as well as the advocacy of leading researchers (namely the attendees of the DSRPAI), convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in a machine that could transcribe and translate spoken language as well as high throughput data processing. Optimism was high and expectations were even higher. In 1970 Marvin Minsky told Life Magazine, “from three to eight years we will have a machine with the general intelligence of an average human being.” However, while the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved (Harvard University, 2022).

Major sub-fields of AI include (Chowdhury, M. and Sadek, A.W., 2012):
  •  Machine Learning: the capability of a computer system to learn from data and mimic human cognitive functions such as learning and problem-solving. Through machine learning, a computer system uses math and logic to simulate the reasoning that people use to learn from new information and make decisions.
  •  Neural Networks: a method in artificial intelligence that teaches computers to process data in a way that is inspired by the human brain. It is a type of machine learning process, called deep learning, that uses interconnected nodes (the basic units of the structure), or neurons, in a layered structure that resembles the human brain.
  •  Evolutionary Computation: a sub-field of artificial intelligence (AI) used extensively in complex optimization problems and for continuous optimization. Evolutionary computation is used to solve problems that have too many variables for traditional algorithms.
  •  Vision AI (also known as Computer Vision): a field of computer science that trains computers to replicate the human vision system. This enables digital devices (like face detectors and QR code scanners) to identify and process objects in images and videos, just like humans do.
  •  Robotics: artificial intelligence (AI) and robotics are a powerful combination for automating tasks inside and outside the factory setting. In recent years, AI has become an increasingly common presence in robotic solutions, introducing flexibility and learning capabilities into previously rigid applications.
  •  Expert Systems: a computer program that uses artificial intelligence (AI) technologies to simulate the judgment and behavior of a human or an organization with expert knowledge and experience in a particular field.
  •  Speech Processing: a discipline of computer science that deals with designing computer systems that recognize spoken words.
  •  Natural Language Processing (NLP): a branch of artificial intelligence within computer science that focuses on helping computers understand the way that humans write and speak.
  •  AI Planning: a field of artificial intelligence that explores the use of autonomous techniques to solve planning and scheduling problems. A planning problem is one in which we have some initial starting state that we wish to transform into a desired goal state through the application of a set of actions.
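The planning formulation above (an initial state transformed into a goal state by a set of actions) can be sketched as a breadth-first state-space search. The room-to-room domain below is a made-up toy example, not taken from any particular planner:

```python
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first search for the shortest sequence of actions
    turning `initial` into `goal`.

    actions: dict mapping an action name to a (precondition, effect)
    pair, where precondition(state) -> bool and effect(state) -> new state.
    Returns a list of action names, or None if the goal is unreachable.
    """
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for name, (pre, eff) in actions.items():
            if pre(state):
                nxt = eff(state)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

# Toy domain: a robot moving between rooms A, B and C (a state is a room).
actions = {
    "go_A_to_B": (lambda s: s == "A", lambda s: "B"),
    "go_B_to_C": (lambda s: s == "B", lambda s: "C"),
    "go_B_to_A": (lambda s: s == "B", lambda s: "A"),
}

print(plan("A", "C", actions))  # → ['go_A_to_B', 'go_B_to_C']
```

Real planners use richer state and action representations (e.g. logical preconditions and effects), but the initial-state/goal-state/actions structure is the same.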

Why Artificial Intelligence?

Artificial intelligence (AI) is pushing the boundaries of machine-enabled functionalities. This bleeding-edge technology enables machines to act with a degree of autonomy, resulting in the effective execution of iterative tasks.

AI facilitates the creation of a next-generation workplace that thrives on seamless collaboration between the enterprise system and individuals. Therefore, human resources are not made obsolete, but rather, their efforts are bolstered by emerging tech. In fact, AI provides organizations with the luxury of freeing up resources for higher-level tasks.

The following are the primary advantages of AI:

  •  AI drives down the time taken to perform a task. It enables multi-tasking and eases the workload for existing resources.
  •  AI enables the execution of hitherto complex tasks without significant cost outlays.
  •  AI operates 24x7 without interruption or breaks and has no downtime.
  •  AI augments the capabilities of differently-abled individuals.
  •  AI has mass-market potential; it can be deployed across industries.
  •  AI facilitates decision-making by making the process faster and smarter.

Artificial Intelligence and Cybersecurity:

Cybersecurity is by far the area that applies AI the most. AI is used in cybersecurity to quickly analyze millions of events and identify many different types of threats, from malware exploiting zero-day vulnerabilities to risky behavior that might lead to a phishing attack or the download of malicious code. This technology is a self-learning system that automatically and continuously gathers data from across your enterprise information systems. This data is then analyzed and used to correlate patterns across millions to billions of signals relevant to the enterprise attack surface and identify new types of attacks. (Li, J.H., 2018)
With today's ever-evolving cyber-attacks and proliferation of devices, analyzing and improving cybersecurity posture is no longer a human-scale problem. Artificial Intelligence (AI) based tools for cybersecurity have emerged to help information security teams reduce breach risk by providing real-time monitoring of the attack surface, improving security threat detection, and prompting action so your security team can “keep up with the bad guys”. Given these benefits, AI-based cybersecurity posture management systems are growing in popularity as more organizations recognize their ability to automate threat detection and predict cyberattacks with matchless precision. (Zhang, Z. et al., 2021)
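As a rough illustration of the pattern-analysis idea described above, here is a minimal anomaly-detection sketch: flag hours whose event count deviates strongly from the historical mean. The threshold and the login-attempt data are invented for the example; real AI-based security tools use far richer models than a z-score:

```python
import statistics

def anomaly_scores(event_counts, threshold=3.0):
    """Flag counts more than `threshold` standard deviations from the mean.

    event_counts: e.g. login attempts per hour. Returns a list of
    (hour_index, count, z_score) triples for each anomalous hour.
    """
    mean = statistics.mean(event_counts)
    stdev = statistics.stdev(event_counts)
    flagged = []
    for i, count in enumerate(event_counts):
        z = (count - mean) / stdev
        if abs(z) > threshold:
            flagged.append((i, count, round(z, 2)))
    return flagged

# 23 quiet hours plus one burst of failed logins (hypothetical data).
counts = [12, 10, 11, 13, 9, 12, 11, 10, 12, 11, 13, 10,
          11, 12, 9, 10, 12, 11, 13, 12, 10, 11, 12, 95]
print(anomaly_scores(counts))  # only the burst in the final hour is flagged
```

The same correlate-then-flag structure scales up, with learned models in place of the z-score, to the millions of signals mentioned above.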
8 Benefits of Using AI for Cybersecurity:

AI is the ideal cybersecurity solution for businesses looking to thrive online today. Security professionals need strong support from intelligent machines and advanced technologies like AI to work successfully and protect their organizations from cyber-attacks. These are the top uses of AI in cybersecurity (Prasad, R. and Rohokale, V., 2020):
  • AI Learns More Over Time
  • Artificial Intelligence Identifies Unknown Threats
  • AI Can Handle a Lot of Data
  • Better Vulnerability Management
  • Better Overall Security
  •  Reduces Duplicative Processes
  • Accelerates Detection and Response Times
  • Securing Authentication

Rise of concerns about AI: reflections and directions:

The following list enumerates all 39 ethical issues that were identified from the case studies and the Delphi study (Dietterich, T.G. and Horvitz, E.J., 2015):

  •  Cost to innovation
  •  Harm to physical integrity
  •  Lack of access to public services
  •  Lack of trust
  •  “Awakening” of AI
  •  Security problems
  •  Lack of quality data
  •  Disappearance of jobs
  •  Power asymmetries
  •  Negative impact on health
  •  Problems of integrity
  •  Lack of accuracy of data
  •  Lack of privacy
  •  Lack of transparency
  •  Potential for military use
  •  Lack of informed consent
  •  Bias and discrimination
  •  Unfairness
  •  Unequal power relations
  •  Misuse of personal data
  •  Negative impact on justice system
  •  Negative impact on democracy
  •  Potential for criminal and malicious use
  •  Loss of freedom and individual autonomy
  •  Contested ownership of data
  •  Reduction of human contact
  •  Problems of control and use of data and systems
  •  Lack of accuracy of predictive recommendations
  •  Lack of accuracy of non-individual recommendations
  •  Concentration of economic power
  •  Violation of fundamental human rights in supply chain
  •  Negative impact on environment
  •  Violation of fundamental human rights of end users
  •  Loss of human decision-making
  •  Unintended, unforeseeable adverse impacts
  •  Prioritisation of the “wrong” problems
  •  Negative impact on vulnerable groups
  •  Lack of accountability and liability
  •  Lack of access to and freedom of information

Reference list:

PK, F.A., 1984. What is Artificial Intelligence?, p.65.

Dick, S., 2019. Artificial intelligence.

Science in the News. 2022. The History of Artificial Intelligence - Science in the News. [online] Available at: <https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/> [Accessed 7 May 2022].

Arulkumaran, K., Cully, A. and Togelius, J., 2019, July. Alphastar: An evolutionary computation perspective. In Proceedings of the genetic and evolutionary computation conference companion (pp. 314-315).

Chowdhury, M. and Sadek, A.W., 2012. Advantages and limitations of artificial intelligence. Artificial intelligence applications to critical transportation issues, 6(3), pp.360-375.

Li, J.H., 2018. Cyber security meets artificial intelligence: a survey. Frontiers of Information Technology & Electronic Engineering, 19(12), pp.1462-1474.

Zhang, Z., Ning, H., Shi, F., Farha, F., Xu, Y., Xu, J., Zhang, F. and Choo, K.K.R., 2021. Artificial intelligence in cyber security: research advances, challenges, and opportunities. Artificial Intelligence Review, pp.1-25.

Prasad, R. and Rohokale, V., 2020. Artificial intelligence and machine learning in cyber security. In Cyber Security: The Lifeline of Information and Communication Technology (pp. 231-247). Springer, Cham.

Dietterich, T.G. and Horvitz, E.J., 2015. Rise of concerns about AI: reflections and directions. Communications of the ACM, 58(10), pp.38-40.







