AI, machine learning and deep learning : a security perspective / edited by Fei Hu and Xiali Hei.

Material type: Text
Publication details: New York : CRC Press, 2023
Edition: First edition
Description: xi, 313 pages : illustrations ; 26 cm
ISBN:
  • 9781032034041
Additional physical formats: Online version: AI, machine learning and deep learning.
DDC classification:
  • 006.3 23 HEI-A 2023 790150
Partial contents:
Machine learning attack models / Jing Lin, Long Dang, Mohamed Rahouti, Kaiqi Xiong -- Adversarial machine learning : a new threat paradigm for next-generation wireless communications / Yalin E. Sagduyu, Yi Shi, Tugba Erpek, William Headley, Bryse Flowers, George Stantchev, Zhuo Lu, and Brian Jalaian -- Threat of adversarial attacks to deep learning : a survey / Linsheng He, Fei Hu.
Summary: "Today Artificial Intelligence (AI) and Machine/Deep Learning (ML/DL) have become the hottest areas in the information technology. In our society, there are so many intelligent devices that rely on AI/ML/DL algorithms/tools for smart operations. Although AI/ML/DL algorithms/tools have used in many Internet applications and electronic devices, they are also vulnerable to various attacks and threats. The AI parameters may be distorted by the internal attacker; the DL input samples may be polluted by adversaries; the ML model may be misled by changing the classification boundary, and many other attacks/threats. Those attacks make the AI products dangerous to use. While the above discussion focuses on the security issues in AI/ML/DL-based systems (i.e., securing the intelligent systems themselves), AI/ML/DL models/algorithms can be used for cyber security (i.e., use AI to achieve security). Since the AI/ML/DL security is a new emergent field, many researchers and industry people cannot obtain detailed, comprehensive understanding of this area. This book aims to provide a complete picture on the challenges and solutions to the security issues in various applications. It explains how different attacks can occur in advanced AI tools and the challenges of overcoming those attacks. Then many sets of promising solutions are described to achieve AI security and privacy in this book. The features of this book consist of 7 aspects: This is the first book to explain various practical attacks and countermeasures to AI systems; Both quantitative math models and practical security implementations are provided; It covers both "securing the AI system itself" and "use AI to achieve security"; It covers all the advanced AI attacks and threats with detailed attack models; It provides the multiple solution spaces to the security and privacy issues in AI tools; The differences among ML and DL security/privacy issues are explained. Many practical security applications are covered"--
Holdings
Item type | Current library | Collection | Call number | Copy number | Status | Barcode
Reference | Faculty of CS & IT Library Book Cart | Book | 006.3 HEI-A 2023 790150 | 1 | Not For Loan (Restricted Access) | 790150
Books | Faculty of CS & IT Library Book Cart | Book | 006.3 HEI-A 2023 790151 | 2 | Available | 790151
Books | Faculty of CS & IT Library Book Cart | Book | 006.3 HEI-A 2023 790152 | 3 | Available | 790152
Total holds: 0

Includes bibliographical references and index.
